AI has become a modern-day buzzword. No company, whether in the technology space or not, wants to be left behind or caught in a state of FOMO. Commenting on that buzzword status, Mariya suggested:
AI is like teenage sex: everyone talks about it, nobody knows how to do it, everyone thinks everyone else is doing it & so claims to do it.
https://twitter.com/thinkmariya/status/849476338331373568?s=20
That being said, in most cases you should not be using AI to solve your business problems. Often it is just fine to go with your rule engine. Rule-based expert systems work well where the rules and logical outcomes are relatively clear: a series of production rules, very similar to if-then statements, governs how the program infers from and accesses the knowledge base.
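To make the contrast concrete, here is a minimal sketch of such a production-rule system. The loan-approval rules and thresholds are hypothetical examples, not taken from any real system; the point is only that clear if-then logic needs no AI.

```python
# Minimal sketch of a rule-based expert system: each production rule
# is a plain (condition, conclusion) pair applied to a dict of facts.

def evaluate(facts, rules):
    """Fire every rule whose condition holds for the given facts."""
    conclusions = []
    for condition, conclusion in rules:
        if condition(facts):
            conclusions.append(conclusion)
    return conclusions

# Hypothetical loan-approval rules: clear logic, no AI required.
rules = [
    (lambda f: f["credit_score"] < 600, "reject: low credit score"),
    (lambda f: f["income"] >= 3 * f["monthly_payment"], "approve: income sufficient"),
]

print(evaluate({"credit_score": 720, "income": 9000, "monthly_payment": 2000}, rules))
```

Such a system is trivially easy to explain, debug, and maintain, which is exactly the advantage of skipping AI when the rules are already clear.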
Even if your technical team suggests using AI, the odds are overwhelming that what you end up building will not actually be AI.
Google suggests a step-by-step process: first identify the workflow for which you think you need an AI solution. Keep in mind that when we say AI here, it is not real AI; more on that later.
Once you identify the aspect you want to improve, you'll need to determine which of the possible problems actually require AI, which are meaningfully enhanced by AI, and which don't benefit from AI or are even degraded by it.
It’s important to question whether adding AI to your product will improve it. Often a rule or heuristic-based solution will work just as well, if not better, than an AI version. A simpler solution has the added benefit of being easier to build, explain, debug, and maintain. Take time to critically consider how introducing AI to your product might improve, or regress, your user experience.
The situations better suited for AI fall into categories that are well served by Machine Learning and Deep Learning techniques. These include:
- Domain-specific recommendation engines
- Clustering or bucketing on the basis of similarity
- Detection of rare events for which significant data accumulates over a long period of time (e.g. credit card fraud)
The Myth of AI
Let us demystify the AI jargon a little more.
What we understand by AI, or rather what we are made to understand, is not entirely true. Owing to the popularity and hype, any kind of computer analysis and automation is being touted as AI. Real AI is human-level intelligence: it is capable of abstracting concepts from limited experience and transferring knowledge between different domains. What we have today is very narrow and weak AI, if we can still use that term. Deep Blue and AlphaGo, though touted as AI successes, occupy a very narrow spectrum of AI and are far from what is called AGI (Artificial General Intelligence).
Roughly speaking, AlphaGo Zero is just a deep neural network that takes the current state of a Go board as input, and outputs a Go move. Not only is this much simpler than the original AlphaGo, but it is also trained purely through self-play (pitting different AlphaGo Zero neural nets against each other; the original AlphaGo was 'warmed up' by training to mimic human expert Go players). It's not exactly right that it learns 'with no human help', since the very rules of Go are hand-coded by humans rather than learned by AlphaGo, but the basic idea that it learns through self-play rather than through mimicry of human Go players is correct.
The true state of AI research has fallen far behind the technological fairy tales. We have to treat this field with a healthier dose of realism and skepticism, or else the field may be stuck in this rut forever.
Among the ever-distant goalposts for human-level artificial intelligence (HLAI) are the ability to communicate effectively and the ability to continue learning over time.
Highly publicized projects like Sophia convince us that true, human-like AI with consciousness is right around the corner. But in reality, we're not even close. Even for Sophia, the initial hype later gave way to a lot of questions and skepticism in the AI community.
Right now, AI doesn't have free will and certainly isn't conscious — two assumptions people tend to make when faced with advanced or over-hyped technologies, Mousavi said. The most advanced AI systems out there are merely products that follow processes defined by smart people. They can't make decisions on their own.
https://futurism.com/artificial-intelligence-hype
At most, what is currently available under the AI label is Machine Learning and Deep Learning. Both of these approaches build on top of Statistics and Data Mining.
Machine Learning enables computers to learn within narrow domains without being explicitly programmed. The techniques are:
- Supervised learning – Labeled data; used for image and pattern recognition
- Unsupervised learning – Unlabeled data; mostly used for clustering and grouping
- Semi-supervised learning – A combination of the two
- Reinforcement learning – The program tries to reach a goal by repeatedly learning from the feedback its actions produce
- Ensemble methods – Combinations of multiple models, primarily bagging, boosting, stacking and bucketing
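The first item, supervised learning, can be illustrated with the simplest possible learner: a 1-nearest-neighbour classifier that picks up its behaviour from labeled examples rather than from hand-written rules. The training data below is hypothetical.

```python
# Minimal illustration of supervised learning: classify a new point
# by copying the label of the closest labeled training example.

def predict(labeled, x):
    """Return the label of the training point nearest to x."""
    closest = min(labeled, key=lambda pair: abs(pair[0] - x))
    return closest[1]

# Hypothetical labeled data: (measurement, label) pairs.
training = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]

print(predict(training, 1.5))  # -> small
print(predict(training, 8.5))  # -> large
```

Nothing here is explicitly programmed about what "small" or "large" means; the mapping is induced entirely from the labeled data, which is the defining trait of supervised learning.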
Deep Learning – a sub-field of Machine Learning which builds algorithms using multi-layered artificial neural networks. One prominent success is Google Translate, which runs on neural networks. Deep Learning has several drawbacks, including the need for large volumes of clean, labeled data and for heavy compute power in the form of GPUs and TPUs (Tensor Processing Units).
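The "multi-layered" part can be shown in miniature: a forward pass through a two-layer network, with each layer computing weighted sums followed by a nonlinearity. The weights below are arbitrary and untrained; a real system would learn them from data.

```python
# Sketch of the core computation in deep learning: a forward pass
# through a tiny two-layer neural network (untrained, toy weights).

def relu(vec):
    """Elementwise rectified linear unit: max(0, x)."""
    return [max(0.0, x) for x in vec]

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sum of inputs plus bias."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

x = [1.0, 2.0]                                   # input features
hidden = relu(dense(x, weights=[[0.5, -0.2], [0.1, 0.3]], biases=[0.0, 0.1]))
output = dense(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)
```

Deep networks simply stack many such layers, which is why they need so much labeled data and GPU/TPU compute to train.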
These advancements in ML, NLP, and Deep Learning have improved the field, and we are probably a step forward. However, when anyone tells you that they are building an AI system, be sure to ask about AGI (Artificial General Intelligence). Without it, it is probably just another system doing Deep Learning or Machine Learning in a narrow AI space.