Artificial Intelligence: A Guide for Thinking Humans
By Manjila
I picked up this book from the library as it is right up my alley. The book is not very technical, and it describes AI in a clear, easy-to-understand manner. It discusses AI's progress so far in imitating human thinking and what might happen in the near future. It is a great book, and I thoroughly enjoyed it.
Key Takeaways
- Common-sense knowledge is the knowledge that all humans have but that is not written down anywhere. Much of it is subconscious; we don’t even know we have it. This includes our core intuitive knowledge of physics, biology, and psychology, which underlies all our broader knowledge of the world.
- An essential part of human intelligence is the ability to perceive and reflect on one’s own thinking. This is called metacognition.
- In the search for robust, general intelligence, deep learning may be hitting a wall.
- In a post by Andrej Karpathy, the deep learning and computer vision expert who now directs AI efforts at Tesla, he argues that the only way to build computers that can interpret scenes (understand a photo or video) as we do is to allow them to be exposed to all the years of experience we have, to interact with the world, and to use “some magical active learning architecture that I can barely imagine when I think backwards about what it should be capable of.”
- According to Jackie DiMarco, former chief engineer for autonomous vehicles at Ford Motor Company, when we talk about autonomy for a vehicle, it is autonomous within a geofence: an area for which we have a defined high-definition map that shows everything in that area.
- Economist Sendhil Mullainathan, writing about the dangers of AI, cited the long-tail phenomenon in his notion of “tail risk”: “We should be afraid not of intelligent machines, but of machines making decisions that they do not have the intelligence to make. Machine stupidity creates a tail risk. The machine can make many good decisions and then one day fail spectacularly on a tail event that did not appear in its training data. This is the difference between specific and general intelligence.”
- Even the humans who train deep neural networks generally can’t look under the hood and explain the decisions their networks make. The fear is that if we don’t understand how AI systems work, we can’t really trust them or predict the circumstances under which they will make errors.
- Marvin Minsky’s dictum: “Easy things are hard.” The first 90 percent of a complex technology project takes 10 percent of the time, and the remaining 10 percent takes the other 90 percent. The same can be expected in AI: we have Siri and Alexa to help with our daily routines, but machines with the full range of human capabilities look possible only in the far future.