Very simply stated: A.I. is super hot right now.
Yet what is not so hot is that journalists and other content creators are not always doing their best to make sure the information being spread is as accurate as possible. We saw a very good example of that just recently with the whole "Facebook's A.I. creating a new language" farce.
A good rule of thumb at the moment is to mentally replace the words "artificial intelligence" with "machine learning," and educate yourself on the difference.
Once you have made this little adjustment, it becomes much easier to distinguish between machine learning models that perform a task, often more efficiently than any human ever could, but bounded by the parameters of that one task, and more general (true) artificial intelligence, which should perform more like a human would, or at least that is what many people hope to achieve.
This is where I have raised question marks in the past, as it is my belief that humans and machines are very different from one another.
Not so long ago I read an amazing article, The Empty Brain, which directed a huge spotlight on the elephant in the room: the way we humans think about our own brains, and inherently our intelligence.
It is fascinating to see the writer take us on a little journey through history and the evolution of thinking about intelligence; it really makes you doubt everything you think you know.
After all, I once had somebody argue to me that the human brain is "Turing complete"...
Far be it from me to rigorously throw aside the opinions of highly regarded people such as Demis Hassabis, but what I do not understand is why his thoughts on looking to neuroscience for answers are considered a new concept.
I often mention spiking neural networks in times like these, and I think it is well worth checking out the work being done on what is currently the largest artificial brain simulator: Spaun.
I used to be a real believer in spiking neural networks, and even though their practical application is minimal at the moment, I do think they will mature and become highly efficient at performing tasks, whether limited in scope or more general.
Time will tell on that one.
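For those who have never seen what "spiking" actually means here: the basic unit in most of this work is some variant of the leaky integrate-and-fire neuron, which accumulates input, leaks toward rest, and fires a discrete spike when it crosses a threshold. Below is a toy sketch of that idea in Python; all constants are illustrative, and this is nothing like the scale or sophistication of a system such as Spaun.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward its resting value while integrating input current,
# and emits a spike (then resets) whenever it crosses the threshold.
# All parameter values here are illustrative, not biologically tuned.

def simulate_lif(current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, threshold=1.0):
    """Integrate an input current trace; return spike times (step indices)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # Leak toward rest, integrate the input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= threshold:   # threshold crossed: spike...
            spikes.append(step)
            v = v_reset      # ...and reset the membrane potential
    return spikes

# A constant supra-threshold input produces a regular spike train;
# a zero input produces none.
regular = simulate_lif([1.5] * 100)
silent = simulate_lif([0.0] * 100)
```

The point of the exercise is that information lives in the *timing* of discrete spikes rather than in continuous activations, which is exactly where these models diverge from the artificial neural networks in everyday machine learning.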
My second reservation has to do with the nature of biology versus digital systems.
The question arises: will digital technology translate in any way, shape, or form to biological systems?
It is not hard to see that the two are completely different, and that this is not just a matter of "throwing some more processing power at it."
Another question I have is way simpler: Why?
What really is the point of modelling human-like intelligence?
On the one hand we have people looking into building "neural laces" and the like, to augment human intelligence so it can keep up with the machines of the future, on the premise that human intelligence is limited in bandwidth; yet for some reason we want to model this supposedly inferior intelligence in machines.
I do understand, though, that there is probably value in investigating how things work inside our own brains, and in learning the lessons we will need in the future. Still, I feel that focusing on systems far better suited to the digital realm will be much more effective much sooner, while still being able to emulate human-like intelligence for those times they need to interact with the low-bandwidth monkey-brain.
Anyway, these are just some (low-bandwidth) thoughts, and if you have any insight on this, drop a comment below, because I don't think there is one true answer to this at the moment.
We should all be part of this discussion, and it is probably something that should not be left to the researchers alone, because human intelligence spans us all.
Follow me on social media to get updates and discuss A.I. I have also recently created a Slack team to see if people are interested in forming a little A.I. community.
Twitter : @ApeMachineGames
Slack : join the team
Facebook : person / facebook group