Deep Reinforcement Learning: Curiosity-driven Super Mario

I have used Deep Reinforcement Learning with Curiosity-driven Exploration (see https://arxiv.org/pdf/1705.05363.pdf) to train an agent to play Super Mario in the OpenAI Gym for the Nintendo NES emulator. The untrained Mario is obviously the one on the left side. The input data for the agent are the raw pixels. The environmental reward (i.e. the value the agent tries to maximize) is the game score.
I ran the training in a Docker container based on the latest pytorch/pytorch image with some adaptations for the graphics output. My starting point was the example source code from the MEAP book “Deep Reinforcement Learning in Action” by Alexander Zai and Brandon Brown, which I highly recommend. See https://www.manning.com/books/deep-reinforcement-learning-in-action and click on “Source Code”. The training took less than an hour on a 4-core i5 @ 2.9 GHz with 16 GB memory, NO GPU involved. It is a little scary to realize how far one can get with relatively modest computational resources in such a short training time.
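
For readers who want the gist of the curiosity mechanism without digging into the paper: on top of the game score, the agent receives an intrinsic reward proportional to how badly its internal forward model predicted the feature encoding of the next frame. Here is a hypothetical minimal sketch of that bonus in Java; the names and the scaling factor eta are illustrative, not taken from the book's PyTorch code:

// Hypothetical sketch of the curiosity bonus (not the actual code from the book):
// the forward model predicts the feature encoding of the next frame; the worse
// the prediction, the more "surprising" the state and the larger the bonus.
double curiosityReward(double[] predictedFeatures, double[] actualFeatures, double eta) {
    double error = 0.0;
    for (int i = 0; i < actualFeatures.length; i++) {
        double diff = actualFeatures[i] - predictedFeatures[i];
        error += diff * diff;  // squared prediction error of the forward model
    }
    return eta * error / 2.0;  // added to the extrinsic reward (the game score)
}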


What the German government’s national strategy on Artificial Intelligence should look like

Better late than never: in late July, the German federal government published a cornerstone paper on an “artificial intelligence strategy”. The details of the strategy will be worked out by November and presented at the Digital Summit in December.

There will be a lot to do after the summer break. The paper, in its current state, is mostly generic, and very little commitment shines through. When you read it to the end, you’ll find a silver lining in the very last section, “Immediate measures of the Federal Government” (my translation):

Attracting and retaining AI experts in Germany has immediate priority across programs and policies. Networking and expansion of competence centers with France will be implemented without delay.

If only these two bullet points translate into tangible action, the paper would be worth the paper it is printed on. Yet it does not define a strategy. The paper as a whole looks more like a concatenation of brainstorming items with no clear direction.

Some of the cornerstones seem unrelated to the topic of AI, for example the idea of investing in infrastructure. Sounds good, but what does it mean in the context of AI? Building a national GPU cloud?

So let’s see if we can do better. Let’s define a better AI strategy for Germany.

A better AI strategy for Germany

Global context

First, let’s look at the global context, because it makes no sense to position ourselves without knowing where everybody else is standing.

China

When AlphaGo beat Lee Sedol in 2016, we sympathized with the Korean people, who had to witness a national idol losing to DeepMind’s machine. What I didn’t realize then was that for China, this was nothing short of a Sputnik shock. An ancient Chinese game that requires intellectual brilliance, strategic planning, experience and intuition to win was suddenly mastered by a British (of all nations) team of a few young people and their computer. It was apparent that a technology with these powers could be used for far more than playing games. And this came at a time when the West was more reluctant than ever to share advanced technology with China.

So Beijing committed itself to becoming the global AI leader by 2030. China has earmarked hundreds of billions of dollars for collaboration with its existing tech leaders and for encouraging the rise of unicorn startups. Last year, the State Council’s release of a national strategy for AI development channeled and focused existing initiatives and made them a national priority.

Have a look at the Future of Humanity Institute’s paper Deciphering China’s AI Dream for more.

USA

Today, the United States is by all means leading the field of artificial intelligence research, and it has also produced the most policy reports on AI strategy. It is difficult to say whether the current federal administration is willing to implement any of these strategies, and if so, how long the political climate will allow researchers and engineers to move forward at a meaningful speed.

From the outside it looks like the administration prefers to leave the field mostly to the private sector. When it comes to the industrial application of AI, this corresponds well with the overall political direction. AI innovations can play a key role in concert with other policies. For example: after the pullback from globalization and the pushing of part of the workforce out of the country, AI will have to play a major role in replacing manual labor.

But with a few exceptions, automation is not the area of artificial intelligence that American innovators are best at. The boundaries of the field are pushed by companies that will keep targeting the global market with disruptive services, mostly for end users. The fact that these companies have their headquarters in the United States does not automatically create a competitive advantage for the nation, beyond increased tax revenues.

At the end of the day, the ability or inability of Washington to implement a concise strategy might not even matter, because the United States has other federal bodies that operate with relative independence and have proven in the past that they can efficiently help out in these situations. DARPA, with its annual three-billion-dollar budget and great track record of strategic investments, is an outstanding example. Less visible, but probably no less effective, are the efforts of the intelligence agencies to utilize AI for their specific needs.

European Union

In 2013, the European Union proposed the 10-year Human Brain Project, still one of the largest human brain research projects in the world.

A year earlier, in 2012, the European Commission decided to initiate a Public-Private Partnership in Robotics, later named SPARC. Driven by an aging population and little access to cheap labor, manufacturing companies in the EU are traditionally under more pressure to automate than companies in China and the US. Hence robotics is taken very seriously in the EU.

Japan

Japan has an even sharper focus on robotics, mostly for the same reasons as the EU. It has, by a wide margin, the most robot users and robotics equipment and service manufacturers in the world.

France

In March, Emmanuel Macron outlined France’s national strategy for artificial intelligence. The government will spend 1.5 billion euros over five years to support AI research, encourage startups, and collect data.

In a Wired interview, Macron discussed the reasons behind the initiative. While you can read some fear of missing out between the lines, one central goal seems to be defending European values and the way we live. When a technology shapes every aspect of our lives as AI does, it’s best to be involved in shaping the standards that govern this technology.

Paris is already an AI hub, with labs of some of the biggest players. With regained self-confidence after winning the soccer World Cup, and under considered, sober-minded political leadership, it looks like France will enter the club of global AI leaders sooner rather than later. Either with or without the rest of the EU.

United Kingdom

With the chaos surrounding Brexit, it is hard to say whether the UK will be able to execute consistently on a national AI strategy, or any strategy at all. But it certainly has enormous potential: DeepMind, SwiftKey and Babylon all started in the UK.

In April, the UK government presented something like a national AI strategy in a quite detailed policy paper called the “AI Sector Deal”. The point of this deal is to establish a strong partnership between business, academia and government. The objective is rather bold:

 A revolution in AI technology is already emerging. If we act now, we can lead it from the front. But if we ‘wait and see’ other countries will seize the advantage. Together, we can make the UK a global leader in this technology that will change all our lives.

The goals do not exude unnecessary modesty either:

  • AI and Data Economy – We will put the UK at the forefront of the artificial intelligence and data revolution
  • Future of Mobility – We will become a world leader in the way people, goods and services move
  • Clean Growth – We will maximise the advantages for UK industry from the global shift to clean growth
  • Ageing Society – We will harness the power of innovation to help meet the needs of an ageing society

India

In June, an Indian government think tank presented the country’s AI strategy in the form of a discussion paper. Among other things, the paper discusses the possibility of using AI for social inclusion and of positioning India as an AI hub for the developing world.

Others


Sophia, Saudi Arabia’s first robot citizen, gave a speech at the pre-opening of the Munich Security Conference earlier this year.

Many other countries are implementing their AI strategies at full steam. The UAE has a Ministry for AI, and Saudi Arabia has at least one robot citizen. A good and regularly updated overview of national AI strategies can be found on Tim Dutton’s blog.


Timing, Pace and Direction

Germany is a little late in this.

Here is the good news: consolidation has not even begun. The development is so fast that it does not matter much if you are behind today. We are currently in the qualifying phase, where nations, blocs and organizations compete for pole position in the much more important race for the best utilization of AI. The prize is a short but abundant economic and political dominance that the winner will use to shape societies and the global political landscape for many years. This race will be won by the region that wants it most. And currently that seems to be China.

But not all contestants in this race run in the same direction, so it is unclear whether we even have a race and what the criteria for winning are. While Silicon Valley focuses on creating science-fiction technology to create new markets and disrupt others, China will work on using AI for efficiency gains to stay competitive in its existing markets, and probably on intelligence and military applications to keep its trade routes open. Sub-Saharan Africa, on the other hand, will keep looking for innovative ways to provide public services without first building up the expensive 19th-century infrastructure that serves as a basis for these services in other countries.

It makes sense that every region focuses on solving its particular problems first. While it seems inevitable that countries and blocs compete in building up AI capabilities, they don’t necessarily need to compete in the development of specific technologies or certain applications of AI. In the past, learning from each other has mostly been a good practice with new technologies. With AI comes a new twist: now even our systems can learn from each other. And they will, even if we don’t want it. There is currently no feasible IP protection for machine behavior, for the same reason that there is no way to stop monkeys from copying each other’s behavior. Nonetheless an implicit protection exists: when an AI system solves a problem that others don’t have, there is little incentive to copy it. There is also indirect protection: when my AI system finds the formula for an active pharmaceutical ingredient faster than your AI system, I can protect this result through the established procedure.

Geopolitical situation

One consequence of America’s shifted priorities is that the Western world as a whole is without direction, and so far it is doing a terrible job of finding a new leader. It has also become clear that China is neither willing nor able to assume responsibility for the world order as fast as the United States is giving it up. The Western hemisphere, and to some extent the rest of the world, needs a replacement for America’s leadership. Germany has always been particularly vulnerable to geopolitical chaos. While meandering in the growing maze of political fragmentation, Germany at least needs to coordinate new policies and strategies with its neighbors, to create some coherence and stability. Beyond that, in order to find a new order for the West, we need to develop the ability to give trusted neighbors the lead in important topics.

Better Cornerstones

With the groundwork in place, let’s now see what the cornerstones of a German AI strategy should be:

  1. European cooperation, especially with France
  2. Quantum AI
  3. Edge computing
  4. Empowering the individual
  5. Grand Challenges

European Cooperation, especially with France

Cooperation with France was already in the original paper, and there seems to be some progress. It is by far the most important point, for the reason that President Macron mentioned in the Wired interview: to defend European values we need to participate in setting the standards, and this cannot be done effectively with regulations (otherwise our public spirit would be in much better shape, for we have no shortage of regulations). It must be done by shaping the technology, establishing facts by creating useful systems and promoting them, so that they become de-facto standards when others start using them.

This concerns our freedom and the way we live. It is absolutely essential that Europe is united in this, because no single European nation, not even France, is even remotely on par with China and the United States at this time. If France goes ahead in this quest, Germany should make it its top priority to provide every possible support.

Quantum AI

The second-largest strategic mistake we can make is a not so obvious one: not providing students at technical universities with access to quantum computers. The match of quantum computing’s opportunities to AI’s problems is so good that as soon as “quantum supremacy” is reached, it will have an even greater impact on AI than on cryptography (and the impact on cryptography is expected to be drastic). The impact will also come sooner, because for AI applications we don’t have to wait for the quantum computing community to figure out error correction to the quasi-deterministic level we have in classical computers today. For cryptanalysis this is essential. For neural network training it is not.

I did not read anything about quantum computing in the cornerstone paper at all. A national strategy should at least consider the opportunities that might open up for AI; especially in a country that is left behind in supercomputing, but is at the same time the home of Werner Heisenberg and Max Planck.

Edge Computing

Even without AI, Edge Computing (beyond Industry 4.0 scenarios) should be a national priority for rather mundane reasons like poor internet connectivity and expensive data plans. Add AI and data-driven business models to the picture, and it becomes clear that Edge Computing solves a whole pile of problems that are specific to Germany. To pick the most obvious one: strict privacy regulations make it hard for businesses (and impossible for small businesses) to offer cloud-based data-driven services. But when personal data stays on premises at all times (because the relevant data processing happens on the consumer’s side), a whole new world of innovative services becomes possible, without subjecting the people who offer these services to the prospect of draconian punishments.

Moving AI workloads from the cloud to the edge introduces changes that need special consideration.

  • Training deep neural networks can require a lot of computing power. Moving high-performance computing capabilities to the edge would be a waste of resources if they are only fully used for short peak loads. Research and product development that works towards a smooth utilization of edge resources should be supported. Priority should be given to use cases where either a high but even load is put on edge nodes naturally, for example deep reinforcement learning with live data, or where neighboring nodes can sell idle resources to nodes that temporarily need stronger capabilities, for example based on IOTA.
  • Machine learning can be power-hungry too. Moving workloads out of centralized data centers and closer to the data works well together with decentralized electricity production from renewable sources. It reduces the amount of electricity that needs to be transported to the industry hotspots.

After years of shifting all relevant computing into the cloud, Edge Computing is a paradigm change. There are many obstacles to overcome, but most are technical and will be solved quickly as soon as people start seriously working on them. Talent and money seem to be in place for this to happen, but it will lose momentum fast if it turns out that a blurry, ambiguous legal framework puts the protagonists at risk. To make Edge Computing happen, the German government has to make sure, with clear, concise, reality-aware regulations, that misguided jurisdictional aberrations are kept in check, so they don’t keep end users from using edge devices.

Empowering the individual

The perspective taken by the cornerstone paper is very much top-down. It talks very little about empowering people. Of course, it is crucial to attract an elite and create an ideal environment for them to work in. But we should not stop there. The field of AI is vast and largely unexplored, with plenty of room for surprises. People here tend to have a broad and solid education, even those without a Data Science PhD. This is a great resource that we should tap into if we want to get ahead. When Germany promotes citizen science, when German companies encourage their employees to use the available tools and implement AI solutions within their own realm of expertise, with their existing data, to solve their own local problems, then we will very soon have a broad adoption of AI technology made in Germany across all industries and areas of society.

If we don’t do that, most of these solutions will come years later (when one of the few experts finally has time), or organizations will fall back on standard products or cloud solutions that won’t give Germany any competitive advantage.

Grand Challenges

Grand challenges define the areas that we want to push forward with special rigor. They represent the most pressing problems we hope to solve with AI technology. We are looking for the best possible solutions and offer high rewards for them. These problems are:

  1. Agile Intrusion Detection: Detecting hackers and dangerous software early is important for organizations as well as for individuals, in a world where cyber warfare is more and more becoming a regular tool of robust diplomacy, and nation states carry out direct attacks against private entities. To be able to protect personal data and trade secrets efficiently, European businesses need intelligent shields that detect and stop complex attacks with close to 100% accuracy. It is a huge design flaw in the GDPR to allow EU member states to just dump the responsibility for this part of cyber defense on the first line of victims in the crossfire of coordinated attacks: those private entities that happen to work with personal data. Barbershops and soccer clubs are not in the business of cyber defense. Nation states are. The states should build and provide the tools that support protecting people’s data from attacks by criminals and other nations. These are defensive weapons of modern warfare, and it is the responsibility of states to develop and deploy or provide them. Highly accurate intrusion detection and prevention is also an important puzzle piece for making Edge Computing a success, because people will rightfully refuse to invest in devices that put them at risk. Computers on the edge need to be able to defend themselves against unforeseen attacks in a nimble and adaptive way. This can only be solved with an advanced combination of AI techniques, and Germany needs the best possible solution, so the German government should make it a top priority to get the best talents in the field to work on this task.
  2. Reliable information and democratic consensus building: Fine-grained political campaigns and micro-targeting lead to ever more polarization and radicalization, even within homogeneous groups of people. When similar people who should have similar interests are systematically presented different facts and are shielded from other facts, these people diverge from each other in a way that undermines social cohesion and the fabric of democracy. The traditional mass media have proven to be ill-equipped to curtail this development. To detect coordinated disinformation campaigns before being sucked into them, people need a tool that is much more personalized than mass media can be. We need easy-to-use instruments for each citizen to quickly check facts and put them into perspective at the moment they are presented to her. 85 years after the introduction of the Volksempfänger, it is time to introduce a device to deflate propaganda. This service needs to be free from commercial interests and political influence. It should operate as automatically as possible, but needs an independent controlling body to keep machine bias in check. And first of all, it needs to be created. That is what the second grand challenge is about.
  3. Next Generation Personal Agent: It is already becoming difficult to imagine a world without smart personal assistants like Amazon Alexa™, Google Assistant™ and Microsoft Cortana™. They are extremely useful in organizing daily private life and are getting better every day. They are also a picture-book example of the principal-agent problem. When one of these agents is asked to perform an action that contains a conflict of interest between the owner and the service provider (i.e. Amazon, Google, Microsoft), it will tend to act in a way that resolves the conflict in favor of the service provider. We need a device that offers similar services as smart assistants, but is able to learn to make decisions in favor of the owner. Since this device needs to adapt to the owner much better than existing smart assistants do, the learning should be less centralized than it is in current solutions. Ideally the device should be able to use its own computing capabilities for the training process. This also allows sensitive private and personal data to be used for the training without sending it to external service providers. The goal should be an electronic personal agent that the owner trusts enough that she does not need to control its actions.


The Force Awakens: AI and Modern Conflict — #MSC2018 warm up

The 54th Munich Security Conference had an unofficial pre-opening yesterday, with only a handful of the formal attendees and a public panel discussion about the upcoming role of AI in modern warfare. The panelists represented political and military entities and one NGO. This composition distinguished yesterday’s event from a technical conference in a way that was at the same time delightful and disturbing.

The most notable contributions came from the two and a half women on the stage. Kersti Kaljulaid, president of Estonia, offered some advice on how the executive might be able to contain the development of rogue AI. Her proposals covered the whole spectrum, from helpless actionism (monitoring energy use, apparently hoping that the developers of bad AI don’t use cloud resources) to pragmatic and feasible, but generic, approaches (building a blockchain-based marketplace for whistleblowers, to generate leads to malicious operations from their own flaky members).

Mary Wareham of Human Rights Watch coordinates the “Campaign to Stop Killer Robots”. She used the discussion to draw attention to the question of what international agreements can do to prevent the development and use of fully autonomous lethal weapons in warfare. Given the scope of the conference (and the fact that many of the people involved in this discussion have to rely on second-hand information when it comes to technical capabilities), this seems to be the only question really leading anywhere.


And then there was Sophia. She has never held public office or exerted much influence on international matters. But she is the first robot citizen of Saudi Arabia, and she delivered the opening speech of the day. Without an active role in the panel, she spent the rest of the event at the speaker’s desk, and it was quite entertaining to watch her (probably unintentionally) shaking her head when certain topics came up.

The other panelists were Darryl A. Williams, Lieutenant General and Commander of NATO’s Land Forces, and Anders Fogh Rasmussen, former NATO Secretary General. The moderator was NYT columnist David E. Sanger.

Applied AI with DeepLearning, IBM Watson IoT Data Science Certificate

I’ve just (literally minutes ago) completed “Applied AI with DeepLearning, IBM Watson IoT Data Science Certificate”. It is a very well prepared course by IBM — mostly by the very nice people of the Munich Watson IoT Center 🙂 and with some important portions by Skymind, the awesome creators of DL4J — delivered through Coursera.

The course covers a lot of ground in a very short time. Details get lost at this speed, so if you are looking for a deep understanding of AI, you will be happier with some of the offerings of academia. But if you are looking for a refresher or an update on industry trends, this course is for you. Even more so if you are an industry practitioner with a software background and need to come up to speed on AI.

Here is the link to the course. If you have more time and are looking for a solid foundation, I recommend Andrew Ng’s “Machine Learning”. Of course there is nothing to stop you from taking both courses…


Deep Reinforcement Learning for Bitcoin trading

It’s been more than a year since the last entry regarding automated Bitcoin trading was published here. The series was supposed to cover a project in which we used deep learning to predict Bitcoin exchange rates for fun and profit.

We developed the system in 2014 and operated it all through 2015. It performed very well during the first three quarters of 2015, … and terribly during the last quarter. At the end of the year we stopped it. Despite serious losses during the last three months, it can still be considered a solid overall success.

I never finished the series, but recently we deployed a new version, which includes some major changes that will hopefully turn out to be improvements:

  • We use Reinforcement Learning, following DeepMind’s basic recipe (Deep Q-learning with Experience Replay) from the iconic Atari article in Nature (a minimal sketch of the idea follows below). This eliminates the separation of prediction and trading as distinct processes. The inference component directly produces a buy/sell decision instead of just a prediction. Furthermore, the new approach eliminates the separation of training and production (after an initial training phase). The neural network is trained continuously on the trading machine. No more downtime is needed for re-training once a week, and no separate compute hardware lies idle with nothing to do for the other six days of the week.
  • We use Deeplearning4j (DL4J) instead of Matlab code for the training of the neural network. DL4J is a Java framework for defining, training and executing machine learning models. It integrates nicely with the trading code, which is written in Java.
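
To make the recipe a little more concrete, here is a minimal sketch of Deep Q-learning with Experience Replay in Java. The QNetwork interface, the action encoding and all hyperparameters are illustrative placeholders, not our actual DL4J setup:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical wrapper around the neural network (e.g. a DL4J model).
interface QNetwork {
    double[] predict(double[] state);             // estimated Q-value per action
    void train(double[] state, double[] target);  // fit one state toward a target
}

// One stored interaction: state, chosen action, observed reward, following state.
class Experience {
    final double[] state, nextState;
    final int action;     // e.g. 0 = sell, 1 = buy
    final double reward;  // profit or loss realized in this step
    Experience(double[] s, int a, double r, double[] s2) {
        state = s; action = a; reward = r; nextState = s2;
    }
}

class ReplayTrainer {
    static final int CAPACITY = 10000, BATCH_SIZE = 32;
    static final double GAMMA = 0.99;  // discount factor for future rewards
    private final List<Experience> buffer = new ArrayList<>();
    private final Random rnd = new Random();
    private final QNetwork net;

    ReplayTrainer(QNetwork net) { this.net = net; }

    void remember(Experience e) {
        if (buffer.size() == CAPACITY) buffer.remove(0);  // drop the oldest experience
        buffer.add(e);
    }

    // One training step: sample a random mini-batch from the replay buffer
    // and nudge the network's Q-estimates toward the Bellman targets.
    void replayStep() {
        for (int i = 0; i < BATCH_SIZE && !buffer.isEmpty(); i++) {
            Experience e = buffer.get(rnd.nextInt(buffer.size()));
            double[] target = net.predict(e.state);  // current estimates
            double[] nextQ = net.predict(e.nextState);
            double maxNext = nextQ[0];
            for (double q : nextQ) maxNext = Math.max(maxNext, q);
            target[e.action] = e.reward + GAMMA * maxNext;  // Bellman target
            net.train(e.state, target);
        }
    }
}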

This will change the course of this blog. Instead of finishing the report on what we did in 2014, I am now planning to write about the new system. It turns out that most of the code we have looked at so far is also in the new system, so we can just continue where we left off a year ago.

Data Re-Coding 1

Previously on BigNotesOnPersonalDataScienceTheory: Leonardo, being the great experimentalist that he is, has been collecting Bitcoin price data for weeks, while Sheldono has figured out why the price prediction with artificial neural networks should work in theory. Now they need Howardo: to cobble the theory and the data together into something that is actually usable in the real world. Meanwhile it remains unknown what Rajesho is up to.

Howardo’s job is difficult. Leonardo has provided him with endless time series data from the past. Sheldono gave him an extended, patronizing, dismissive and snotty lecture on how easy it is to determine whether or not some limited, rather static data matches some idiosyncratic pattern in the present. And as an output, everyone expects a clear, confident prediction of the future. It is clearly impossible to get from the input of his friends to the desired output.


Poor Howardo! His slight despair turns into utter panic when he learns that — of all people in the world — you have been assigned the task of helping him sort things out.

You recognize that Howardo has two distinct problems to solve:

  1. Convert the time series data to a format that can be used as an input for the neural network. Like the webcam picture from the previous post, it should have a fixed size and should come as a coherent block of data, not as a continuous data stream.
  2. Turn the pattern matching result into a prediction of the future.

“Wow, I didn’t notice that, thank you (RollEye)”, says Howardo, “but how are you going to see the future based on the matching result — without a crystal ball?”

“Well,” you say, “that’s easy. I did this for my boss before.” (Howardo gasps with relief.)

You add: “It was a disaster” (Howardo hyperventilates).

Turning the pattern matching result into a prediction of the future

Let’s reconsider: what went wrong with your boss’s trend prediction? The basic idea does not seem to be wrong: you recognize a trend and base your action on the assumption that the trend can be extrapolated.

This works great in daily life and is the reason why we are able to walk without falling over, to recognize when it is a good day to take an umbrella to work, and to avoid snowballs that people throw at us. All without a crystal ball. All learned by the neural network between our ears.

Your boss’s artificial neural network was really good at recognizing a straight line, but failed miserably at predicting the future, because the prediction was based on the wrong assumption that a straight line is a good predictor of future price increases. As it turns out, it is not.

Your boss only considered a chart of a whole trading day, which means that he probably bought and sold his shares just before the closing bell. What happened in the early trading hours is rather irrelevant at this point. The overall trend of the day (if there was any) is replaced by a new trend that is fed by the anticipation of what happens overnight in other markets.

If your boss had thought it through to the end, the filter would probably not have looked like this:

[Figure: 8×8 filter heatmap for an uptrend]

Instead, it would have looked more like this:

[Figure: 8×8 uptrend filter heatmap with the later trading hours weighted more heavily]

Chances are that with this filter as the intermediate layer weight matrix of the neural network, your boss would have earned some money. But the prediction performance would still be far from great. It would just be a tool to point out what is already obvious. Instead of a bad predictor, we now have a mediocre predictor.

This is the point where intelligent design reaches its limit. In order to make better predictions, the weights in the weight matrix must be learned from real data, not set by you. You must get the neural network to adapt in response to success and failure, much like you learned to predict whether a snowball’s trajectory ends in your face or not — from failure and success.

“Howardo”, you hear yourself saying, “we must feed the learning algorithm with the first half of the price’s trajectory. We let the neural network predict whether the trajectory leads to a pleasant impact spot or not. When it is wrong, we will punish it.”

Howardo comes back to life. “That sounds like fun,” he says. You are not sure which part of your proposal he is referring to, but it’s probably the punishment part.

For reasons that will become apparent in a later post, you decide to call the partial price trajectory a “feature vector” and the information about whether the resulting price is pleasant or not a “class label”.

With this insight, it becomes easy to define a data format that is suitable for training the neural network:

  • The feature vector is the content of a sliding window that you pull through Leonardo’s historic Bitcoin price data. For example, you could say that for each point in time you read the previous 1000 price samples out of the time series into a feature vector.
  • For the class label you have to peek into the future. Thank goodness, time is relative, and from your point in spacetime the near future of all of Leonardo’s price samples is also in the past. So you decide that for each sample in the time series you compare the price with the price 10 samples later, which corresponds to “10 minutes later”. If the later price is higher, you choose the class label “pleasant”, otherwise the class label “unpleasant” (a sketch of this re-coding follows below).
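
Here is a minimal sketch of this re-coding in Java, assuming the price samples arrive as one chronologically ordered array with one sample per minute; the names and the boolean label encoding are illustrative:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class ReCoder {
    static final int WINDOW = 1000;  // feature vector: the previous 1000 price samples
    static final int HORIZON = 10;   // label: compare with the price 10 samples later

    final List<double[]> featureVectors = new ArrayList<>();
    final List<Boolean> classLabels = new ArrayList<>();  // true = "pleasant"

    void recode(double[] prices) {
        for (int t = WINDOW; t + HORIZON < prices.length; t++) {
            // slide a fixed-size window through the history: samples t-1000 .. t-1
            featureVectors.add(Arrays.copyOfRange(prices, t - WINDOW, t));
            // peek into the (relative) future: is the price higher 10 samples later?
            classLabels.add(prices[t + HORIZON] > prices[t]);
        }
    }
}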

When your neural network is finally fully trained with this data, it will still not be able to look into the future. But it will hopefully be able to classify an aggregation of 1000 consecutive price samples as a member of a class of price trends that — with a certain likelihood — leads to a higher price 10 minutes later. And that’s all we can hope for.

This brings us to a little terminological oddity. In much of the machine learning literature, “prediction” seems to be used as a synonym for “recognized class”. This leads to funny statements like “the classifier predicts that the picture shows a cat” — after the classifier has processed the picture of the cat. Better get used to it…

In the next post, we will examine the actual Java code for the data re-coding.


Why Neural Networks work

One popular explanation of the fact that artificial neural networks can do what they can do goes along these lines:

  1. A brain is capable of doing these things.
  2. An artificial neural network is a simulation of a brain.
  3. Therefore an artificial neural network can do these things, too.

Admittedly, most things in computer science that “work”, in the sense that they produce useful output for the real world, are implementations of theoretical models that have been built by other sciences, so this is kind of a valid explanation. I don’t like it anyway.

My first problem with it is this: it’s not quite true that an artificial neural network (ANN) is a simulation of a brain. To be fair, some come impressively close. But in the context of this blog we unambitiously restrain ourselves to the level of sophistication that we find in most real-world ANNs, which are radical (radical!) simplifications of even the simplest natural neural networks.

Second: it does not help most people understand why an ANN is capable of doing useful work. Unless you already understand the brain, it won’t help you much when I tell you how we are going to map gray matter to mathematical concepts. (And if you already understand the brain: welcome to the blog. You can skip the rest of this post, if you want.)

I want to come from the other side and approach the topic as an engineering problem. Buckle up. We are going to manufacture a special-purpose classification machine, and then (in a later post) we will generalize it and see if the result has any similarity to what the neurosciences know about the brain.

As a basic motivation, let’s assume that your boss has found a webcam that shows a stock market chart (like this), and came up with this brilliant idea: he will become insanely rich with a new piece of software that reads the chart and outputs some kind of likelihood that the market is in an upward trend. Your boss calls this likelihood “Zuversicht”, and we are going to stick with this term for a while, because we like German words, and because the corresponding English word (“confidence”) already has a certain meaning in statistics, and we want to prevent confusion resulting from ambiguous terminology.

OK, now our input is an image from a webcam, so we have a two-dimensional array of pixel colors. To make it easier, you convert the image to grayscale, so you only have to think about the pixels’ brightness and can ignore hue and saturation. You look at some examples of upward trends and can’t help but observe that the lines tend to start in the lower left corner and zigzag their way to the upper right corner.

[Figure: 2×2 filter heatmap for a strict uptrend, with the chart overlaid]

Breaking the image down into quadrants, you notice that in these cases the average pixel brightness in Q2 and Q3 is higher than in Q1 and Q4. With this insight you write the following lines of code and declare your job done.

double zuversichtUptrend(double[][] image) {
    // quadrants in reading order: Q2 = upper right, Q3 = lower left
    int height = image.length;
    int width = image[0].length;
    double avgQ2 = avgBrightness(image, 0, height / 2, width / 2, width);
    double avgQ3 = avgBrightness(image, height / 2, height, 0, width / 2);
    return avgQ2 + avgQ3;
}

double avgBrightness(double[][] img, int rowFrom, int rowTo, int colFrom, int colTo) {
    double sum = 0.0;
    for (int row = rowFrom; row < rowTo; row++)
        for (int col = colFrom; col < colTo; col++)
            sum += img[row][col];
    return sum / ((rowTo - rowFrom) * (colTo - colFrom));
}

It works great on the test data, your boss is happy, and his boss gives him a raise. But a few weeks later, he tells you that he’s not happy anymore. He has not become insanely rich!

What went wrong?

Apparently, your program has misclassified the trend on several occasions. So you take a look at the chart images for those days, and see two major flaws in your approach:

[Figure: the webcam chart with a filter resolution that is too coarse]

  1. On some days, the chart went almost flat or turned back to negative. The chart was just low enough in the early hours to run through Q3 and just high enough in the later hours to run mostly through Q2.
  2. On other days the chart went clearly down, but your software’s Zuversicht value was very high. The reason turns out to be that the overall brightness of the picture was high on those days, illuminating Q2 and Q3 without the chart line covering much space in them.

So you start the second iteration of your engineering endeavor.

To solve problem 1, you obviously need a higher resolution. Let’s try 8×8! This partition conveniently allows us to identify each field with chessboard notation.

[Figure: 8×8 filter heatmap for a strict uptrend]

A perfect upward trend, which your boss defines as a straight line from the lower left to the upper right, will light up the fields A1, B2, …, H8 while the other fields remain dark. The flat chart from problem 1 will rather light up the fields A4, B4, .., G5, H5. Great, but what about all the other possible charts that show a trend that goes upward in a non-steady, somewhat chaotic fashion? This is, after all, rather the norm than the exception.

[Figure: the webcam chart against the strict 8×8 filter]

Let’s add some fuzziness to the system. The intuition is like this: for each field you guess the probability that the full chart shows an overall upward trend, given that this particular field is lit. For example, if the lower right corner (H1) is lit, the probability of an overall positive trend is zero. If the field to its left (G1) is lit, the probability is close to zero, but there is still a possibility that the chart makes a radical upward turn in the remaining 1/8 of the chart. The closer you get to the perfect upward trend, the higher the probability becomes.

[Figure: 8×8 filter heatmap with graded probabilities for an uptrend]

You call the resulting 8×8 grid of numbers a “weight matrix”. You can use it as a filter for the actual chart images by doing the following:

  1. For each field of the loaded picture, you multiply the actual average brightness by the corresponding value in the probability matrix. The product will be a high value when the average brightness is high and the value in the probability matrix is high. Otherwise it is a low value. You repeat this step for each field, 64 times altogether.
  2. You add up all the products.

The closer the actual chart zigzags around the ideal chart, the higher the sum will be. But even when the actual chart goes astray: as long as it remains on a positive trajectory, we will get a relatively high result from this calculation.
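
In code, this filtering step is just an element-wise multiply-and-sum over the 64 fields (a dot product). A minimal sketch, with illustrative names:

double applyFilter(double[][] fieldBrightness, double[][] weights) {
    double sum = 0.0;
    for (int row = 0; row < 8; row++) {
        for (int col = 0; col < 8; col++) {
            // step 1: multiply each field's brightness by its weight;
            // step 2: accumulate the 64 products into one sum
            sum += fieldBrightness[row][col] * weights[row][col];
        }
    }
    return sum;
}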

So far so good. Let’s look at the second problem: the tide lifts all boats, and the ceiling light lights up all pixels. When someone turns on the light in the trading room, all pixels in the webcam picture become brighter. Even areas that the chart line does not cross appear brighter, which renders our filtering result worthless.

Let’s add a preprocessing step to fix this. If there were no line chart in the picture, all pixels would have approximately the same brightness, and they would be supposed to be black (brightness zero). If, in this case, you subtracted the overall average brightness from each field’s measured brightness, the result would be all black fields. Subtracting the overall average brightness normalizes the picture to what is needed for our further processing.

Now add the line chart to your consideration. Because it covers only a very small fraction of the image, it does not change the overall average brightness too much. The light noise that illuminated the dark parts of the image also made the bright parts (the lines of the chart) brighter. So if we subtract the overall average brightness from the bright pixels, we also normalize those parts of the image to what is expected as input for the next processing step.

Great, now you know what to do to solve problem 2. The question is: how do you do it? Wouldn’t it be great if you could implement both processing steps in a unified way? In other words: is it possible to define a weight matrix in such a way that, when we apply it to the input data, the average overall brightness subtraction of your preprocessing step is executed? Turns out: it is possible.

Imagine the following weight matrix for field A1:

  • Value at position A1: 1-1/64
  • Value at all other positions: -1/64

Please convince yourself that this matrix will do the average subtraction for position A1. Of course, this works just as well for all other positions.
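
Building one such row per field yields the full preprocessing matrix. A minimal sketch, with illustrative names:

// Row p of the matrix has 1 - 1/n at position p and -1/n everywhere else;
// multiplying it with the flattened brightness vector subtracts the overall
// average brightness from field p.
double[][] meanSubtractionMatrix(int n) {  // n = 64 for the 8x8 grid
    double[][] weights = new double[n][n];
    for (int p = 0; p < n; p++) {
        java.util.Arrays.fill(weights[p], -1.0 / n);
        weights[p][p] += 1.0;  // 1 - 1/n at the field's own position
    }
    return weights;
}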

[Figure: the 8×8 pixel-normalization filter]

Hmmm, interesting: you just solved two seemingly totally different problems with the same approach. It feels a little odd to define a huge matrix for a calculation that could easily be done procedurally, but you have a feeling that there might be a systematic advantage in a unified way of tackling problems in this project. Also, of course, you know that vector (and with them matrix) calculations are the strong point of GPU data processing as well as of highly optimized software packages like Matlab (“Matrix Lab”!) and Octave. You feel that after your initial success, your boss might become greedy, which will ultimately put more load on your software. Having some strong performance afterburners like these in your arsenal might come in handy later.

Your overall process has three steps now:

  1. You create an 8×8 matrix from the image data as the input data layer. (To facilitate vector operations, you “flatten” this matrix to a vector of length 64, but that’s an implementation detail.)
  2. For each field you apply the corresponding preprocessing 8×8 weight matrix to the whole input layer 8×8 matrix. The result is a new 8×8 matrix, which you call the “hidden layer”. (And in the real world, you would do this again with “flattened” vectors and a large 64×64 weight matrix representing all fields. This is mathematically equivalent and can be well parallelized. Again: just an implementation detail.)
  3. You apply the classification weight matrix to the hidden layer and get the Zuversicht value as output (see the sketch below).
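
Put together, the whole process collapses into two matrix operations on flattened vectors. A minimal sketch, with illustrative names:

double zuversicht(double[] input, double[][] preprocessWeights, double[] classifierWeights) {
    int n = input.length;  // 64 for the 8x8 grid
    // hidden layer = preprocessing weight matrix times flattened input
    double[] hidden = new double[n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            hidden[i] += preprocessWeights[i][j] * input[j];
    // output = classification weights times hidden layer
    double out = 0.0;
    for (int i = 0; i < n; i++)
        out += classifierWeights[i] * hidden[i];
    return out;
}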


There you go: without thinking much about neurons, synapses and ganglia, you have handcrafted your first artificial neural network. Your new software is actually what people call a Feedforward Neural Network with a linear activation function. When you define a threshold value for the Zuversicht output, you also have a binary linear classifier.

Your neural network is still far from being perfect. You will eventually get there, but not today. Let’s just mention a few things that you would need to think about before going into production:

  • It is not able to learn! It works because you were able to provide a “model” (that is, the weights in the weight matrices). This is good enough for now, but in the future we would prefer to let the computer do the work of figuring out the model data.
  • It is not well protected from eccentric input data. Imagine what happens if a camera error or a data transfer problem produces, for a single pixel, a value of 325212498434 instead of a value in the expected range between 0 and 1.
  • It will still fail to make your boss immeasurably rich, because it does not predict anything. It only classifies a chart as close enough to your boss’s definition of a perfect chart. This is what he wanted, so it is partly his fault. But we can nevertheless do better.

Even with these shortcomings, you have hopefully built up some intuition for how a neural network is able to recognize a pattern. We have seen that, even without actively imitating nature, we arrive at a similar result when we just work our way to the best solution in a straightforward manner.

A little heads-up: In the next post, we will build the software to convert the collected Bitcoin price and market data to a format suited as an input data layer for a neural network like this. If your data collector from the previous post is not running yet, please start it soon to have some data to play with next time.