
Ghetto Distributed Computing For Neural Networks

A DIY model of how to set up distributed computing for neural networks.

Using an unlikely technique, we can actually set up a very scalable distributed service for neural networks. We will look at adding a simple JavaScript include to any website to turn every visitor into a compute node.


The idea for this post came, like many of my ideas, after watching a video.
I have been working on a web service that can host various artificially intelligent structures in a way that I can access them from wherever I need in third-party code projects.
Let me give you a simple example.
 
There are three database tables, neural_nets, neurons, and weights. These tables will have the appropriate fields and keys so that they link and work together to form any type of neural network you desire.
Any row in the neurons table has a layer field, for instance, so if you want to go deeper, you simply add more layers.
Are you with me so far? Good.
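To make the three-table idea concrete, here is a minimal sketch, assuming hypothetical field names (net_id, layer, from_id, to_id, value — the original only names the tables and the layer field), of how rows from those tables could be joined in memory to run a forward pass:

```javascript
// Hypothetical rows mirroring the neural_nets, neurons, and weights tables.
const neural_nets = [{ id: 1, name: "demo_net" }];
const neurons = [
  { id: 1, net_id: 1, layer: 0 },
  { id: 2, net_id: 1, layer: 0 },
  { id: 3, net_id: 1, layer: 1 },
];
const weights = [
  { from_id: 1, to_id: 3, value: 0.5 },
  { from_id: 2, to_id: 3, value: -0.25 },
];

// Feed inputs forward through one net by joining the tables in memory.
function forward(netId, inputs) {
  const byLayer = neurons
    .filter((n) => n.net_id === netId)
    .sort((a, b) => a.layer - b.layer);
  const activation = {};
  let i = 0;
  for (const n of byLayer) {
    if (n.layer === 0) {
      activation[n.id] = inputs[i++]; // input layer: copy the inputs in
      continue;
    }
    // Weighted sum of everything feeding into this neuron, then sigmoid.
    const sum = weights
      .filter((w) => w.to_id === n.id)
      .reduce((s, w) => s + w.value * activation[w.from_id], 0);
    activation[n.id] = 1 / (1 + Math.exp(-sum));
  }
  const last = Math.max(...byLayer.map((n) => n.layer));
  return byLayer.filter((n) => n.layer === last).map((n) => activation[n.id]);
}
```

A real service would of course pull these rows out of the database rather than hold them in arrays, but the join logic would look much the same.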
 
Once everything is trained up, the web service exposes a RESTful API that can be queried to retrieve a neural_net by some identifier field, and it will give you back all the functionality of that particular trained network.
Simple enough.
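As a sketch of what querying that API could look like: the endpoint path and response fields below are my own guesses, and the network call is stubbed out so the snippet runs standalone.

```javascript
// Hypothetical shape for GET /neural_nets/:id. A real client would do:
//   const net = await (await fetch(`/neural_nets/${id}`)).json();
// The stub below just returns the same shape without a server.
function fetchNet(id) {
  return {
    id,
    name: "demo_net",
    neurons: [{ id: 1, net_id: id, layer: 0 }, { id: 2, net_id: id, layer: 1 }],
    weights: [{ from_id: 1, to_id: 2, value: 0.5 }],
  };
}

const net = fetchNet(42);
console.log(`Loaded "${net.name}" with ${net.neurons.length} neurons`);
```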
 
 
But, as we all know, training a neural network can take a very long time, especially with all the quirks of backpropagation and the like, and therefore I started thinking about distribution.
 
My initial instinct was to write some software that compiles into a binary executable for various platforms that people can install to become a distributed node for calculation.
It is kind of like how the SETI@home program works: whenever your machine idles, you give up some of your processing power to the cause, helping SETI perform the various calculations they need, and sending the results back to them.

Then I saw the video in question, which I will link below.

So what can we use this for?

Training weights is a great candidate first and foremost.
While a new neural net is still in training, clients (or bots) could connect to the web service, query for a piece of training data and the model it is being used to train, run the calculation, and return the result to the web service, which then places it in the right spot in the database.
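Here is a toy sketch of that loop, under stated assumptions: the "server" is just an in-memory queue, each task bundles one example with the net's current weights, and the node reports the gradient for a single sigmoid unit. In production the queue and the result hand-off would be HTTP calls to the web service.

```javascript
// Stand-in for the Master/Control server's task queue: one training
// example plus the current weights of the net it belongs to.
const taskQueue = [
  { netId: 1, weights: [0.5, -0.25], input: [1, 0], target: 1 },
  { netId: 1, weights: [0.5, -0.25], input: [0, 1], target: 0 },
];
const results = [];

// The work one distributed node does per task: run the example through a
// single sigmoid unit and compute the per-weight gradient (cross-entropy
// loss with a sigmoid gives the familiar (prediction - target) * input).
function processTask(task) {
  const z = task.weights.reduce((s, w, i) => s + w * task.input[i], 0);
  const prediction = 1 / (1 + Math.exp(-z));
  const err = prediction - task.target;
  const grad = task.input.map((x) => err * x);
  return { netId: task.netId, grad };
}

// "Bot" loop: drain the queue and report results (here: push to an array;
// in production: POST each result back to the web service).
while (taskQueue.length) results.push(processTask(taskQueue.shift()));
```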

Synthesizing new training data is a relatively new concept in the field of neural network training, and does pretty much what it says on the tin.
The idea here is to find a way to generate more training data than you have originally, by either combining bits of data you do have, or coming up with new and clever ways to generate new data from scratch.
In an example I heard about, people were developing a handwriting classifier, and one of the methods they used to synthesize more training data was to have a script download random images from the internet, open them in a word processor, and print a letter on each image in a random font.
Because they knew which letter they were printing, they could tell what the expected output for a training step using this data should be, while still having a completely new piece of training data.
The results, as I am told, were incredibly solid.
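The handwriting example works because the label is known by construction. As a minimal stand-in for the same idea (jittering numeric feature vectors rather than compositing images, since the principle carries over), a synthesizer might look like:

```javascript
// Mint new labelled examples from an existing one by perturbing its
// features with small noise; the label is known by construction, just as
// it was for the printed letters in the handwriting example.
function synthesize(sample, copies, noise = 0.1, rand = Math.random) {
  const out = [];
  for (let c = 0; c < copies; c++) {
    out.push({
      input: sample.input.map((x) => x + (rand() - 0.5) * noise),
      label: sample.label, // carried over unchanged
    });
  }
  return out;
}
```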


This lends itself rather perfectly to distribution, especially because the processes on the Master/Control server would not need to know anything about the synthesizing process itself. All the server needs to receive is the newly generated training data and the expected output, which the distributed node can easily provide once it is ready and sends its results to the server.
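A sketch of the server side of that handoff, with made-up field names: the Master/Control server only validates and stores whatever a node submits, and stays ignorant of how the data was produced.

```javascript
// Accept a synthesized training example from a node. The server never sees
// the synthesis logic; it only checks the submission has the fields it
// needs and files it away. (Field names here are illustrative guesses.)
function acceptSubmission(db, submission) {
  const ok =
    submission.netId != null &&
    Array.isArray(submission.input) &&
    submission.expected !== undefined;
  if (!ok) return false;
  db.training_data.push({
    net_id: submission.netId,
    input: submission.input,
    expected: submission.expected, // label known by construction on the node
  });
  return true;
}
```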

Of course we cannot use the same tactics described in the Black Hat video, because we do not want to do anything nefarious, but we can certainly set up specific pages on the Internet that people know they can visit to become part of the research, just by loading the page.
The really interesting thing is that JavaScript is actually quite a fast language, so I believe this has real potential to get around some of the problems independent researchers face when it comes to the computing power required for hardcore artificial intelligence research.
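One possible shape for that JavaScript include, with hypothetical endpoint names: the browser pulls a task during idle time, does the arithmetic, and posts the result back. The scheduling is guarded so the pure computation can also run outside a browser.

```javascript
// The per-task arithmetic a visitor's browser would perform.
function dotProduct(weights, inputs) {
  return weights.reduce((s, w, i) => s + w * inputs[i], 0);
}

// Pull one task, compute, report back. "/task" and "/result" are made-up
// endpoints standing in for the web service's real API.
async function workOnce() {
  const task = await (await fetch("/task")).json();
  const result = dotProduct(task.weights, task.inputs);
  await fetch("/result", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: task.id, result }),
  });
}

// Only schedule in a real browser; requestIdleCallback ensures we use
// spare cycles instead of making the page sluggish for the visitor.
if (typeof requestIdleCallback !== "undefined") {
  const loop = () => requestIdleCallback(() => workOnce().finally(loop));
  loop();
}
```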

Another idea I am toying with is to open up the web service altogether, with some sort of API key structure, and allow anyone who signs up to create their own entities on the Master/Control server, while all the "bot" clients indiscriminately pick up tasks from the database and perform the necessary actions.
This would really democratize the potential computing power this can deliver to our research.

I would love to hear some ideas on this, so if you got 'em, show 'em.

Danny

Follow me on social media to get updates and discuss A.I. I have also recently created a Slack team to see if people are interested in forming a little A.I. community.

Twitter  : @ApeMachineGames
Slack    : join the team
Facebook : person / facebook group
