He has done it again!
Elon Musk spoke, and the world is sharing his vision en masse, with social media yet again ablaze with his latest quotes.
So, Elon Musk now views artificial intelligence as an existential risk, and claims that the dangers we are facing are far greater than we initially thought.
And while throwing any kind of criticism Musk's way usually results in quite a lot of hate from the fans, somehow he keeps being allowed to make these blanket statements that, for the most part, are less than adequately examined.
After all, we are talking about a man who himself owns a company experimenting with artificial intelligence.
A company whose website has some very interesting quotes of its own, equally devoid of any real argumentation or empirical evidence to back them up.
"By being at the forefront of the field, we can influence the conditions under which AGI is created."
This sure is an interesting idea, were it not that the tools for experimenting with machine learning and artificial intelligence are, for the most part, open-source, so I do not see how one would influence the conditions under which other people and companies apply them.
It seems to me that while they are indeed at the forefront, and capable of inventing the future, eventually, as with all the other companies that have open-sourced their tools, innovation will come from many fronts.
"We will not keep information private for private benefit, but in the long term, we expect to create formal processes for keeping technologies private when there are safety concerns."
I don't know what kind of doublespeak this is supposed to be, but when you can walk onto public forums as large as Elon Musk can and claim that you have never had more concerns about artificial intelligence than right now, don't you think it is a bit late to still be starting work on "formal processes" in the long term?
One of the questions we could ask ourselves is this: If Elon Musk wants to create regulations around artificial intelligence, does he want to be one of the regulators?
And, if so, given he has an A.I. company himself, does this constitute a conflict of interest?
The crazy part of all of this is that I think he is right about one thing: artificial intelligence is potentially a real threat, and indeed sooner than most people think.
In my article HowTo: Create A Rogue A.I. (for Dummies), I already showed some ways that A.I. can go horribly wrong, long before we even reach AGI, ASI, or any kind of singularity.
The threat lies not so much in intelligence, but in over-connectivity to real-world environments.
Personally, I don't think you can ever regulate this industry, nor could you ask companies and researchers to just "slow down a little," as Elon Musk is suggesting.
There is a real need to come together and think about safety protocols that are built not only into the algorithms and systems themselves, but also into the way we connect these to the real world, and this is going to take a lot of time.
Basically, it is a crapshoot at the moment, and we are all going to be affected by it. Whether it will work out for or against us is really anyone's guess right now.