Notes on artificial intelligence, December 2017
Most of my comments about artificial intelligence in December, 2015 still hold true. But there are a few points I’d like to add, reiterate or amplify.
1. As I wrote back then in a post about the connection between machine learning and the rest of AI,
It is my opinion that most things called “intelligence” — natural and artificial alike — have a great deal to do with pattern recognition and response.
2. Accordingly, it can be reasonable to equate machine learning and AI.
- AI based on machine learning frequently works, on more than a toy level. (Examples: Various projects by Google)
- AI based on knowledge representation usually doesn’t. (Examples: IBM Watson, 1980s expert systems)
- “AI” can be the sexier marketing or fund-raising term.
3. Similarly, it can be reasonable to equate AI and pattern recognition. Glitzy applications of AI include:
- Understanding or translation of language (written or spoken as the case may be).
- Machine vision or autonomous vehicles.
- Facial recognition.
- Disease diagnosis via radiology interpretation.
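To make the "pattern recognition and response" framing concrete, here is a minimal sketch, with invented data, of one of the simplest pattern recognizers: a 1-nearest-neighbor classifier. Real systems learn from vastly larger labeled corpora, but the recognize-then-respond structure is the same.

```python
import math

# Toy training "patterns": (feature vector, label). The data is made up
# purely for illustration.
training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.5), "dog"),
]

def classify(point):
    """Respond with the label of the nearest stored pattern."""
    nearest = min(training, key=lambda t: math.dist(point, t[0]))
    return nearest[1]

print(classify((1.1, 0.9)))  # cat
print(classify((5.1, 5.0)))  # dog
```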
4. The importance of AI and of recent AI advances differs greatly according to application or data category.
- Machine learning and AI have little relevance to most traditional transactional apps.
- Predictive modeling is a huge deal in customer-relationship apps. The most advanced organizations developing and using those rely on machine learning. I don’t see an important distinction between machine learning and “artificial intelligence” in this area.
- Voice interaction is already revolutionary in certain niches (e.g. smartphones — Siri et al.). The same will likely hold for other natural language or virtual/augmented reality interfaces if and when they go more mainstream. AI seems likely to make a huge impact on user interfaces.
- AI also seems likely to have huge impact upon the understanding and reduction of machine-generated data.
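The predictive-modeling point above can be made concrete with a toy sketch. This is not any vendor's method; the fields and records below are invented, and the model is plain logistic regression fit by gradient descent, scoring a hypothetical churn risk.

```python
import math

# Invented customer records: (months_active, support_tickets) -> churned?
data = [
    ((2, 5), 1), ((1, 4), 1), ((3, 6), 1), ((2, 7), 1),
    ((24, 1), 0), ((36, 0), 0), ((18, 2), 0), ((30, 1), 0),
]

def sigmoid(z):
    # Numerically safe logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Fit logistic regression by plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.05
for _ in range(5000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def churn_risk(months_active, tickets):
    return sigmoid(w[0] * months_active + w[1] * tickets + b)

print(churn_risk(2, 6))   # new-ish customer, many tickets: high risk
print(churn_risk(30, 1))  # long-tenured, few tickets: low risk
```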
5. Right now it seems as if large companies are the runaway leaders in AI commercialization. There are several reasons to think that could last.
- They have deep pockets. Yes, but the same is true in any other area of technology. Small companies commonly out-innovate large ones even so.
- They have access to lots of data for model training. I find this argument persuasive in some specific areas, most notably any kind of language recognition that can be informed by search engine use.
- AI technology is sometimes part of a much larger whole. That argument is not obviously persuasive. After all, software can often be developed by one company and included as a module in somebody else’s systems. Machine vision has worked that way for decades.
I’m sure there are many niches in which decision-making, decision implementation and feedback are so tightly integrated that they all need to be developed by the same organization. But every example that remotely comes to mind is indeed the kind of niche that smaller companies are commonly able to address.
6. China and Russia are both vowing to lead the world in artificial intelligence. From a privacy/surveillance standpoint, this is worrisome. China also has a reasonable path to doing so (Russia not so much), in line with the “Lots of data makes models strong” line of argument.
The fiasco of Japan’s 1980s “Fifth-Generation Computing” initiative is only partly reassuring.
7. It seems that “deep learning” and GPUs fit well together for AI/machine learning uses. I see no natural barriers to that trend, assuming it holds up on its own merits.
- Since silicon clock speeds stopped increasing, chip power improvements have mainly taken the form of increased on-chip parallelism.
- The general move to the cloud is also not a barrier. I have little doubt major cloud providers could do a good job of providing GPU-based capacity, given that:
- They build their own computer systems.
- They showed similar flexibility when they adopted flash storage.
- Several of them are AI research leaders themselves.
Maybe CPU vendors will co-opt GPU functionality. Maybe not. I haven’t looked into that issue. But either way, it should be OK to adopt software that calls for GPU-style parallel computation.
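The reason GPU-style parallelism fits machine learning so well is that deep learning's core operation, matrix multiplication, decomposes into independent dot products. A toy sketch follows; Python threads merely illustrate the independence, whereas real GPUs run thousands of such lanes concurrently in hardware.

```python
from concurrent.futures import ThreadPoolExecutor

def dot(row, col):
    return sum(r * c for r, c in zip(row, col))

def matmul_rows_independent(A, B):
    # Transpose B once so its columns are contiguous tuples.
    Bt = list(zip(*B))
    # Each output row depends only on one row of A and all of Bt, so the
    # rows can be computed in any order, or all at once. That independence
    # is exactly the structure GPUs exploit with thousands of small cores.
    with ThreadPoolExecutor() as pool:
        rows = pool.map(lambda row: [dot(row, col) for col in Bt], A)
    return list(rows)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_rows_independent(A, B))  # [[19, 22], [43, 50]]
```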
8. Computer chess is in the news, so of course I have to comment. The core claim is something like:
- Google’s AlphaZero technology was trained for four hours playing against itself, with no human heuristic input.
- It then decisively beat Stockfish, previously the strongest computer chess program in the world.
My thoughts on that start:
- AlphaZero actually beat a very crippled version of Stockfish.
- That’s still impressive.
- Google only released a small fraction of the games. But in the ones it did release, about half had a common theme — AlphaZero seemed to place great value on what chess analysts call “space”.
- This all fits my view that recent splashy AI accomplishments are focused on pattern recognition.
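AlphaZero's actual training (deep networks plus Monte Carlo tree search) is far beyond a blog snippet, but the self-play idea itself can be shown on a toy game. Below is a sketch of my own, not Google's method: tabular Q-learning that learns the game of Nim (take 1-3 sticks per turn; whoever takes the last stick wins) purely by playing against itself, with no human heuristic input.

```python
import random

TAKE_MAX = 3       # a player may remove 1-3 sticks per turn
START = 10         # sticks at the start of each self-play game
Q = {}             # Q[(sticks, take)] -> estimated value for the player to move

def actions(n):
    return range(1, min(TAKE_MAX, n) + 1)

def best_value(n):
    return max(Q.get((n, a), 0.0) for a in actions(n))

def train(episodes=20_000, alpha=0.1, eps=0.2):
    for _ in range(episodes):
        n = START
        while n > 0:
            if random.random() < eps:          # explore
                a = random.choice(list(actions(n)))
            else:                              # exploit current estimates
                a = max(actions(n), key=lambda x: Q.get((n, x), 0.0))
            # Negamax target: taking the last stick wins (+1); otherwise the
            # position is worth minus whatever the opponent can then get.
            target = 1.0 if n == a else -best_value(n - a)
            old = Q.get((n, a), 0.0)
            Q[(n, a)] = old + alpha * (target - old)
            n -= a

random.seed(0)
train()
# The known winning strategy is to leave the opponent a multiple of 4;
# from 10 sticks that means taking 2. Self-play rediscovers it.
print(max(actions(START), key=lambda a: Q.get((START, a), 0.0)))  # 2
```

The point of the toy is the same as the grand claim: the winning heuristic was never programmed in, only discovered by playing.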
Comments
4 Responses to “Notes on artificial intelligence, December 2017”
I’d say pattern recognition and machine learning are only part of what constitutes AI. If Siri were only doing pattern recognition, i.e. understanding natural language commands, it would not be an impressive app. If self-driving cars were only doing pattern recognition, they’d be going nowhere. The main differentiator of AI is that machines are actually taking action. Apps are responding to human commands, cars are driving themselves and so on. This is much more than “just” pattern recognition or machine learning.
Pattern matching is the gist of the current state of ML. But ML people are trying, and sometimes succeeding, with more active algorithms (which still have very limited uses, i.e. do not generalize (yet)): Reinforcement Learning, GANs. It is very crude machinery (more like a steam engine, as F. Chollet called it https://twitter.com/fchollet/status/950604227620950017, and definitely not a rocket engine, as A. Ng calls it).
Here is an example of how RL is used to reduce trading impact:
https://medium.com/@ranko.mosic/reinforcement-learning-based-trading-application-at-jp-morgan-chase-f829b8ec54f2
You will surely like this – ML learned indexes:
https://blog.acolyer.org/2018/01/09/the-case-for-learned-index-structures-part-ii/
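A minimal sketch of the learned-index idea from that post, in my own toy version rather than the paper's models: fit a line mapping keys to positions in a sorted array, predict a position, then correct the prediction with a bounded local search whose width is the model's worst observed error.

```python
import random

# Toy learned index: instead of walking a B-tree, predict the position of
# a key from a model, then scan a small error window around the prediction.
random.seed(42)
keys = sorted(random.sample(range(100_000), 1000))
n = len(keys)

# "Train": least-squares line, position ~ slope * key + intercept.
mean_k = sum(keys) / n
mean_p = (n - 1) / 2
slope = (sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys))
         / sum((k - mean_k) ** 2 for k in keys))
intercept = mean_p - slope * mean_k

# The worst prediction error over the data bounds the search window,
# which makes lookups exact despite the approximate model.
max_err = max(abs(i - (slope * k + intercept)) for i, k in enumerate(keys))

def lookup(key):
    guess = int(slope * key + intercept)
    lo = max(0, guess - int(max_err) - 1)
    hi = min(n, guess + int(max_err) + 2)
    for i in range(lo, hi):  # bounded scan corrects any model error
        if keys[i] == key:
            return i
    return -1                # key not present

print(lookup(keys[123]))  # 123
```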
Recently an interesting new trend in AI has been taking shape: the merging/hybridization of DL and symbolic (GOFAI) methods based on search. Core investors have started observing diminishing returns from investments in “pure” DL, which is just complicated pattern matching. Scaling DL beyond pattern matching implies, for instance, logic inference. That is much more like a traditional database than matrix crunching on GPUs.
GOFAI failed in the 1980s because it couldn’t handle uncertainty and had little to no learning ability on the hardware of that era. Both of these problems have since been mostly solved.
I don’t expect penetration of AI into existing databases, especially into OLTP, though they may start supporting some forms of AI and ML to serve current needs for intelligent processing of data stored in DBMSes. Instead, I predict the emergence of new DBMS technologies enabling, for example, probabilistic data structures and supporting probabilistic queries over structured distributions of data instead of tables.
Anyway, AI is data-hungry, and that data must be managed somehow. So databases will preserve their central role in IT.
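One concrete example of the probabilistic data structures mentioned above, a standard textbook construction rather than anything tied to a particular DBMS: a Bloom filter, which answers membership queries with a tunable false-positive rate and no false negatives.

```python
import hashlib

class BloomFilter:
    """Probabilistic set: answers 'definitely absent' or 'probably present'."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits)  # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive num_hashes positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # False means certainly absent; True means present with high
        # probability (false positives are possible, false negatives not).
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for user in ["alice", "bob", "carol"]:
    bf.add(user)
print(bf.might_contain("alice"))   # True
print(bf.might_contain("mallory")) # very likely False
```

The structure stores no keys at all, only bits, which is what makes it attractive inside storage engines for skipping blocks that certainly don't contain a key.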