I have decided to do a multi-part series on this, because the topic is dense and complex, and I don’t think we can fully deconstruct it in just one article.
I think there is a need to break it down into a few different topics, each of which can explore a part of the implications that come with the idea of regulation in A.I. and M.L.
Before we start, I want to make clear that I have taken the stance of an agnostic observer when it comes to A.I. safety, and do not claim to know exactly how the future will play out.
I also believe that there is nobody in the world, no matter how close their access to cutting-edge artificial intelligence, who can accurately predict whether A.I. will become dangerous to us in the future, and if so, how that danger will manifest itself.
I suggest that you keep this in mind, not only while reading these articles, but also while you are reading anything else on A.I. safety.
The Cutting Edge
There is an obvious reason why people happily refer to “the cutting edge” of artificial intelligence without giving any indication of what that edge actually represents: the description would be quite underwhelming.
Some of the more practical, and reasonably powerful, implementations of machine learning out in the public domain today are generative models and Q-learning (reinforcement learning) models, which have produced some impressive results in very narrow fields.
We are talking about playing games better than humans, and producing visual imagery in the style of humans.
For example, what OpenAI recently did was quite impressive: defeating the best human Dota 2 player with their new game bot.
Especially impressive was the way it trained: it basically played games against itself, constantly improving through reinforcement learning.
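To make this self-play idea concrete (on a vastly smaller scale than Dota 2, of course), here is a toy sketch in Python: two sides share one value table while repeatedly playing the simple game of Nim against each other, and the table is updated after every finished game. The game, the hyperparameters, and the update rule (a simple Monte-Carlo-style backup rather than full temporal-difference Q-learning) are all my own illustrative choices, not anything OpenAI has published.

```python
import random

# Toy self-play on the game of Nim: players alternate taking 1 or 2
# stones from a pile of 10, and whoever takes the last stone wins.
# Both sides share one value table, so the program effectively plays
# against itself and improves after every finished game.

random.seed(0)                     # reproducible runs
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = {}                             # (stones_left, action) -> value estimate

def legal_actions(stones):
    return [a for a in (1, 2) if a <= stones]

def choose(stones):
    """Epsilon-greedy: mostly pick the best-known move, sometimes explore."""
    actions = legal_actions(stones)
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))

def train(episodes=20000):
    for _ in range(episodes):
        stones, history = 10, []
        while stones > 0:
            action = choose(stones)
            history.append((stones, action))
            stones -= action
        # Whoever moved last took the final stone and wins (+1); walk
        # the game backwards, flipping the sign for the opposing player.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + ALPHA * (reward - old)
            reward = -reward * GAMMA

train()
# With 2 stones left, taking both wins immediately.
print(max((1, 2), key=lambda a: Q.get((2, a), 0.0)))  # → 2
# From the opening pile of 10, leaving a multiple of 3 is the optimal
# move, so with enough episodes this tends to learn to take 1.
print(max((1, 2), key=lambda a: Q.get((10, a), 0.0)))
```

Nobody teaches the program the strategy; it falls out of nothing but wins, losses, and repetition, which is exactly what makes self-play so appealing and, at scale, so powerful.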
What wasn’t mentioned in Musk’s tweet about this, in which he gleefully boasts that this victory was far more impressive than the recent Go victory by Google’s A.I., is that the Dota game was severely limited compared to its normal playing environment.
It was only played 1v1, and both players could only select one character (the same “hero”) with a limited set of attacks.
It represented only a fraction of the complexity of a full team contest, and in a truly fair comparison was not as complex as a game of Go.
What is very interesting here, though, is that while Musk is very vocal about wanting companies to “slow down” and wait for regulation, his own company is now boasting about a huge advance it has made…
The other big leap forward we have recently experienced in machine learning is generative adversarial networks, in which two neural networks constantly “fight” each other to improve themselves.
These networks are used to make things like style transfer possible, the method used to make your photo look like it was painted by Vincent van Gogh.
They are also quite capable of turning crude drawings into more realistic images, or even generating full pictures from just a text description.
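To show the “fighting” dynamic in its barest form, here is a deliberately tiny one-dimensional sketch: a two-parameter “generator” turns noise into samples and tries to mimic a target Gaussian, while a logistic-regression “discriminator” tries to tell real samples from fake ones, with both updated by hand-derived gradients. Real GANs use deep networks on images; everything here (the 1-D setup, the target distribution, the hyperparameters) is my own illustrative simplification.

```python
import numpy as np

# A minimal 1-D "GAN": the generator maps noise z ~ N(0, 1) to a*z + b
# and tries to mimic samples from N(4, 1), while the discriminator
# D(x) = sigmoid(w*x + c) tries to tell real samples from fake ones.

rng = np.random.default_rng(0)

a, b = 1.0, 0.0        # generator parameters (scale, shift)
w, c = 0.0, 0.0        # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * fake + c)
    dx = (1 - d_fake) * w          # d log D(fake) / d fake
    a += lr * np.mean(dx * z)
    b += lr * np.mean(dx)

print(round(b, 2))  # generator mean: should have drifted toward 4
```

The key point is that neither side is told what the target looks like in advance: the generator only ever learns from the discriminator’s verdicts, and the discriminator only gets better because the generator keeps improving its fakes.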
These are all real advances, and their importance should not be overlooked, both at a functional level right now and, later down the line, for the role they may play in the development of strong artificial intelligence.
Finally, there are lesser-known developments that exist mostly on paper as concepts, or as crude versions in labs: theoretical, non-functional, unproven dreams.
It is these dreams that fuel most of the fears surrounding artificial intelligence, but they cannot be filed under the cutting edge just yet, because they simply do not exist.
Okay, so the idea is to implement regulation in the field of artificial intelligence, and it has instantly proven to be a very divisive topic.
There are a couple of arguments you see popping up time and time again around this.
“Good luck regulating A.I. when other countries will probably not care that you are limiting yourself, while they go full steam ahead.”
This is a reasonably fair point, although when it comes to war we already have a model in existence that works quite well: the United Nations.
When we talk about the artificial intelligence that will exist in the future, we always need to consider it on a global scale, and understand that no matter what the situation is in the present, it will affect every one of us.
“How could we even expect to control and/or limit general intelligence? Its unpredictable nature is not something we could understand in the first place.”
Another very good point, and there is no answer to this at the moment, since the whole field concerned with A.I. safety is currently one big unsolved problem.
Yet that does not mean there is anything wrong with trying to implement regulation right now, built from the bottom up, so to speak.
One thing I agree on with Elon Musk: If we have to be reactive in A.I. regulation, it will probably be too late.
Then there is one question I came up with on my own, and to be honest I have no real answer for it at the moment.
If people can say that some code is beautiful and some code is ugly, does that make coding related to art, or at least a creative process, and should it therefore inherit the rights to freedom of expression/speech?
When I first heard about the statements made by Elon Musk at the Governor’s Association, and read more and more about the evolution of the ideas behind them, two thoughts went through my head.
The first one was around the “freedom of expression/speech” argument against regulation I detailed above, and the second one will probably be equally divisive, especially if you are a hardcore Elon Musk fan.
I know that Musk has established himself as someone with a firm grasp of the technology and theory behind artificial intelligence, especially after his little spat with Mark Zuckerberg, but I don’t believe that to be really true.
He is a very smart guy, there is no denying that, and I think he has as much of a grasp on the concepts as one can have without being an outright expert that works in this field on a day to day basis, but that does not make him the right spokesperson for artificial intelligence and machine learning.
This is exemplified by the number of true experts, like Yann LeCun, who have spoken out against him.
I always describe Elon Musk as a businessman first and foremost, and a great one at that, and his goal is to corner markets.
I am not denying that he is for real when he says he wants to change the world for the better, but that still does not take away from the fact that this is a great marketing strategy, and adds to the overall brand that he represents.
And this brand is not OpenAI, or Tesla, or SpaceX, or the Boring Company; no, the brand that Elon Musk represents is: Elon Musk.
Thus, even though OpenAI is a non-profit organization, if they happen to make some huge breakthrough in general artificial intelligence in the future, the value of all other companies under the Elon Musk brand will profit from this greatly.
Musk is inconsistent, which is why I started to question his intentions even before he came out with his wishes for regulation in a sector that he himself has a stake in.
While I am not basing the following in any facts, and it is pure speculation bordering on conspiracy theory, I wanted to record this anyway, because I have a hunch, and it would be a lot of fun to have that validated in the near future.
I think Elon Musk, who previously said that he gave the idea of the Hyperloop to the world because he himself did not have time to work on it, started the Boring Company in order to hand governments the gift of solving part of the traffic flow problem. The Boring Company is not quite a Hyperloop, which makes it a lot simpler to actually implement.
Traffic flow is directly correlated with the economy, and there is nothing governments like more than a rising economy.
This would give him a great way in to becoming one of the key figures in the eventual regulatory constructs being pushed for right now.
Again, this is all pure speculation of course, but even the name “Boring Company” is just the kind of thing Musk would choose for a company tasked with a very small and boring part of a much larger overall strategy.
That’s just my opinion, fan boys don’t kill me! ;)
Follow me on social media to get updates and discuss A.I. I have recently created a Slack team to see if people are interested in forming a little A.I. community.
Twitter : @ApeMachineGames
Slack : join the team
facebook : person / facebook group