
Facebook's A.I. Did Not Invent A New Language

Facebook has led people to believe that it took its experimental chatbot program offline because the bots started to invent their own language. The truth is far less sensational...

Facebook researchers have taken their machine learning algorithm offline, which, as far as I can tell, was being used as an experimental chatbot program.
Of course, most of the media is jumping on this story with the usual sensationalism, quick to connect it to the remarks made by Elon Musk and Mark Zuckerberg, desperately trying to use it as fodder to determine which one was right and which one was wrong.
The answer to that last part, by the way, is: Both, and neither.




This is crazy: I cannot believe the amount of hype this story is getting at the moment. It seems you cannot even turn your back for five minutes before a new media explosion starts somewhere, spreading inaccurate information about artificial intelligence.

And, when I say artificial intelligence, I obviously mean the misuse of the term artificial intelligence.

Of course, if you dig deep enough, you might find some real machine learning researchers, and maybe even people directly connected to this project, talking about the subject in a more level-headed way. But let's be honest: most people get their information from the big media sites.
You know, the ones that are not well versed in the topic, and instead just rewrite articles from their peers.

In any case, I wanted to cover this because a lot of my friends on social media, who know I work in this sector, are sharing the story with me at the moment, and I may as well get my two cents out there so I don't have to respond to every single share.

The beginning of the joke goes something like this: Two chatbots walk into a research lab...

Let's have a quick look at how the two bots were communicating with each other:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

The log above should look somewhat familiar to anyone who has ever experimented with character-level recurrent neural networks; it is a clear sign of either insufficient training or a bug in the way the algorithm tracks and builds context within a sentence.
This type of repetitive pattern is well documented in the issue trackers of the many character-level recurrent neural network code examples out there.
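To see why this kind of degenerate repetition happens, here is a minimal sketch of my own (this is a toy illustration, not Facebook's code, and the word probabilities are entirely made up): when an under-trained model learns only one dominant successor for each word and we decode greedily, generation collapses into a loop very much like the "to me to me" pattern in the log above.

```python
# Toy sketch: greedy decoding from an under-trained "model" where
# each word has a single dominant successor. Once the chain enters
# a cycle, it repeats forever. Transition table is invented for
# illustration only.
bigram = {
    "<s>": "balls",
    "balls": "have",
    "have": "zero",
    "zero": "to",
    "to": "me",
    "me": "to",   # "me" -> "to" closes the loop: "to me to me ..."
}

def greedy_decode(start="<s>", max_len=12):
    """Always pick the single most likely next word."""
    words, cur = [], start
    for _ in range(max_len):
        cur = bigram[cur]
        words.append(cur)
    return " ".join(words)

print(greedy_decode())
# -> "balls have zero to me to me to me to me to"
```

Real character- or word-level models avoid (or at least delay) this collapse by sampling from the full output distribution instead of always taking the argmax; a model with too little training, or a broken context mechanism, has so little probability mass outside the loop that even sampling will not save it.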

It all really depends on what the intention of the algorithm is. If you will permit me a somewhat educated guess, I think they might be experimenting with a chatbot that trains dynamically, meaning it should get better over time. There is, however, one very big hurdle to overcome when doing so, and its effects can be seen here.

You see, no machine learning algorithm has achieved a 100% accuracy rate; there will always be some margin of error in the predicted outputs. If you permit such inaccuracy in your training cycles, those error rates will start to compound over time.
I believe that is what we are really seeing here: the compound effect of inaccuracy within the network, combined with the network dynamically training itself on its own flawed outputs.
It makes sense that the algorithm would quite quickly seem to be "inventing" its own language, as this is just a "natural" way for the network to work around its issues, and there is really nothing scary or weird about that.
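The compounding effect is just arithmetic. Here is a back-of-the-envelope sketch (my own illustration, with an invented accuracy figure): if a model is right 99% of the time, and each generation of training data is produced by the previous model, the fraction of clean signal shrinks multiplicatively with every self-training cycle.

```python
# Toy arithmetic: each self-training cycle keeps only `accuracy`
# of the clean signal, so errors compound multiplicatively.
def clean_fraction(accuracy, cycles):
    fraction = 1.0
    for _ in range(cycles):
        fraction *= accuracy  # each cycle degrades the data a little more
    return fraction

print(round(clean_fraction(0.99, 100), 3))  # ~0.366 after 100 cycles
```

In other words, even a seemingly excellent 1% per-cycle error rate leaves barely a third of the original signal after a hundred rounds of training on your own outputs, which is plenty of room for the output to drift away from anything resembling English.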

Neural networks are designed to find balance within themselves and to (hopefully) stabilize enough to become efficient at a task, and that is exactly what this one is trying to do here.
Yet we should not even say "trying," because that conjures up far too much personality for something that is, in the end, just a compound function. It is quite basic math, honestly: many relatively simple elements composed into a complex whole, achieving something that feels a lot more impressive than it really is.
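To make the "just a compound function" point concrete, here is a deliberately tiny sketch (weights are made up and nothing is learned): each layer is activation(w*x + b), and the "network" is nothing more than those functions nested inside each other.

```python
import math

# A neural "network" is just function composition:
# layer(x) = activation(w * x + b), applied repeatedly.
# One-dimensional example with invented weights; no training involved.
def layer(w, b):
    return lambda x: math.tanh(w * x + b)

f1 = layer(0.5, 0.1)
f2 = layer(-1.2, 0.3)
f3 = layer(2.0, -0.4)

def network(x):
    # Three stacked layers = one compound function f3(f2(f1(x)))
    return f3(f2(f1(x)))

print(network(1.0))
```

A real network uses matrices instead of single weights and millions of these simple units instead of three, but the principle is identical: composition of simple functions, with no "personality" anywhere in sight.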

The reason the researchers took it offline probably has very little to do with the fact that it was drifting away from human-understandable language, or with that being a scary prospect (at least for as long as we can still pull the plug on these things). It has much more to do with the fact that it is hard to study and debug problems within a network if humans cannot understand where it is messing up.
We need to understand natural language based neural networks to be able to see what is wrong, and where to make improvements.

The fact is that natural language is a very difficult subject to capture well with current machine learning technology, and it will take a while before this improves significantly. So expect many more sensationalist articles over the coming years as researchers work on this problem.

Follow me on social media to get updates and discuss A.I. I have also recently created a Slack team to see if people are interested in forming a little A.I. community:
Twitter  : @ApeMachineGames
Slack    : join the team
facebook : person / facebook group


Elon Musk vs. Artificial Intelligence?

By now you are probably aware of Elon Musk's performance at the National Governors Association's meeting, where he proceeded to express his deep concerns about artificial intelligence, and that there is a need for regulation and for researchers and companies to "slow down."

No doubt about it, things are moving fast, and some people calling themselves experts are even saying that we are moving a lot faster than earlier predictions by other experts. Whether that is true or not, I think we can all agree that we need to start actively and openly discussing the safety concerns surrounding A.I.

Meanwhile, we also need to start identifying who the truly valuable sources of expertise in this field are. Because sure: Elon Musk may have invested in and helped start OpenAI, but that does not make him an expert, and after his remarks were recorded on video, most of the truly important machine learning researchers spoke out against him.

Elon Musk is first and foremost a businessman, and we need to keep that in mind when taking in his words, and question what his motivations are when he says there should be more regulation and that other (competing) companies should slow down their research.


3 Reasons Why We Should Skip Human Level A.I.

For the most part, researchers think that before we can have artificial superintelligence, we need to go past human-level intelligence first. But is this a good idea?


HOWTO: Get Started With Machine Learning! (4 Tips)

So you want to know more about machine learning, and how to get started?
Maybe you want to get into the field of technical artificial intelligence, or learn more about the current state of A.I. philosophy.

In this post I will give you some invaluable pointers to resources you can use today to get a better grip on this new and exciting field of research that is taking the world by storm.

Whether you are a programmer, data scientist, mathematician, or in any other way interested in machine learning, there should be something in here for all of you.