AI: Rational Agents and Operating Environments

Reading Time: 3 minutes

Introduction

In our previous blog on basic AI concepts, we touched upon the creation of rational agents. The concept of rationality can be applied to a wide variety of agents operating in any environment. In AI, these agents should be reasonably intelligent.

Much of the AI touted today is smoke without fire. The goal is to create an intelligent agent that behaves reasonably well in an environment, guided by constrained rationality.

Agent in an Environment

A rational agent is any piece of software, hardware, or a combination of the two that perceives its environment through sensors and acts upon it through actuators.

For example, consider a vacuum cleaner as a rational agent. Its environment is the floor it is trying to clean. It has sensors, such as cameras or dirt sensors, that sense the environment, and it has brushes and suction pumps as actuators that take action. A percept is the agent's perceptual input at any given point in time. The action that the agent takes on the basis of that perceptual input is defined by the agent function.

Hence, before an agent is put into the environment, a percept sequence and the corresponding actions are fed into the agent. This allows it to act on the basis of its inputs.

An example would be a table like this:

Percept Sequence    Action
Area1 Dirty         Clean
Area1 Clean         Move to Area2
Area2 Clean         Move to Area1
Area2 Dirty         Clean
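The table above can be sketched as a simple lookup-based agent function. This is a minimal illustration in Python; the percept representation as a (location, status) pair is an assumption, not something the post prescribes.

```python
# A minimal table-driven agent function for the two-area vacuum world.
# Each percept is a (location, status) pair; the names are illustrative.

AGENT_TABLE = {
    ("Area1", "Dirty"): "Clean",
    ("Area1", "Clean"): "Move to Area2",
    ("Area2", "Clean"): "Move to Area1",
    ("Area2", "Dirty"): "Clean",
}

def agent_function(percept):
    """Look up the action for the current percept."""
    return AGENT_TABLE[percept]

print(agent_function(("Area1", "Dirty")))   # -> Clean
print(agent_function(("Area2", "Clean")))   # -> Move to Area1
```

A pure table lookup like this only works while the percept space stays tiny; anything richer quickly makes the table impractical, which is why learned agent functions matter.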

Based on the input (percept), the vacuum cleaner would either keep moving between Area1 and Area2 or perform a clean operation. This is a simplistic example, but more complexity could be built in with environmental factors.

For example, depending on the amount of dirt, the cleaning could be a power clean or a regular clean. This would further require introducing a sensor that could measure the amount of dirt, and so on.

This percept sequence is not only fed into the agent before it starts; it can also be learned as the agent encounters new percepts. The agent's initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented. This is achieved through reinforcement learning or other learning techniques.
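One way to picture such learning is a tabular value update: the agent adjusts its estimate of how good each action is for a given percept as rewards come in. This is a bare-bones sketch in the spirit of reinforcement learning, not the specific technique the post has in mind; the learning rate and reward values are assumptions.

```python
from collections import defaultdict

values = defaultdict(float)   # (percept, action) -> estimated value
ALPHA = 0.5                   # learning rate (illustrative)

def update(percept, action, reward):
    """Nudge the value estimate toward the observed reward."""
    key = (percept, action)
    values[key] += ALPHA * (reward - values[key])

def best_action(percept, actions):
    """Pick the action with the highest learned value for this percept."""
    return max(actions, key=lambda a: values[(percept, a)])

# The agent is rewarded for cleaning a dirty area and penalized for leaving it.
update(("Area1", "Dirty"), "Clean", 20)
update(("Area1", "Dirty"), "Move to Area2", -5)
print(best_action(("Area1", "Dirty"), ["Clean", "Move to Area2"]))  # -> Clean
```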

The idea is that the agents best suited for the AI world are the ones that have immense computing power at their disposal and make non-trivial decisions. As discussed in the earlier post, they need to learn, form perceptions and correlations, and then act rationally as intelligent agents.

Rational Behavior

The rational agent defined above would clean the floor, but it would needlessly oscillate between the two areas, so it is not the most performant agent. Maybe, after a few checking cycles, if both Area1 and Area2 are clean, it should just go to sleep for some time. The sleep time could increase exponentially if there is again no dirt the next time.
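That exponential back-off idea can be sketched in a few lines. The time units and the cap on the sleep interval are illustrative assumptions.

```python
def next_sleep_time(current_sleep, found_dirt, base=1.0, cap=60.0):
    """Double the idle interval after each all-clean check cycle;
    reset to the base interval as soon as dirt is found.

    base and cap (in arbitrary time units) are illustrative choices.
    """
    if found_dirt:
        return base
    return min(current_sleep * 2, cap)

sleep = 1.0
for dirty in [False, False, False, True, False]:
    sleep = next_sleep_time(sleep, dirty)
    print(sleep)
# -> 2.0, 4.0, 8.0, 1.0, 2.0
```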

So the idea is that we define a performance measure, which defines the criteria of success. Success also has costs (penalties) associated with it. For example:

Action                           Points
Moving from one area to another  -5
Suction noise                    -2
Cleaning                         20
Others                           X

Now, for the agent to perform effectively, it would be guided by the above penalty scores. For example, if it moves recklessly between the areas, it loses 5 points every time, so it has to be prudent about its movement. Whenever it cleans, apart from gaining 20 points, it loses 2 points, so it has to make sure it cleans only when the dirt is beyond the defined threshold. Similarly, other penalty points could be associated with other actions.
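Scoring an action sequence against the penalty table above could look like this. The assumption here, consistent with the table, is that every clean also incurs the suction-noise penalty.

```python
ACTION_POINTS = {
    "Move": -5,          # moving from one area to another
    "SuctionNoise": -2,  # incurred on every clean
    "Clean": 20,
}

def score(actions):
    """Total the points for a sequence of actions; each clean also
    pays the suction-noise penalty."""
    total = 0
    for a in actions:
        total += ACTION_POINTS[a]
        if a == "Clean":
            total += ACTION_POINTS["SuctionNoise"]
    return total

# Two moves (-10) plus two cleans (2 * (20 - 2) = 36) gives 26.
print(score(["Move", "Clean", "Move", "Clean"]))  # -> 26
```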

The performance measure has to be well defined as well. In this case, it might be the average cleanliness of the areas over time. The agent would then try to keep the areas clean with the minimum penalties.
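An "average cleanliness over time" measure could be computed as below. Representing cleanliness as the fraction of clean areas at each timestep is an assumption made for illustration.

```python
def average_cleanliness(history):
    """history: per-timestep cleanliness readings in [0, 1],
    e.g. the fraction of areas that are clean at each step."""
    return sum(history) / len(history)

# Two areas observed over four timesteps (clean areas / total areas):
readings = [0.5, 1.0, 1.0, 0.5]
print(average_cleanliness(readings))  # -> 0.75
```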

Hence, in essence, rational behavior depends on:

• The performance measure which defines the criteria of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence realized and learned to date.

Thus, the idea is to optimize the agent on the basis of the success criteria and the associated penalties so that it maximizes its performance.

Knoldus is a team of engineers who build high-performance AI systems using functional programming. We have 5 offices across 4 geographies and would be happy to assist you in making your business more competitive with technology. Give us a shout and we will be happy to share how we are designing enterprise AI programs for our customers. Have a splendid New Year 2020!

Written by 

Vikas is the CEO and Co-Founder of Knoldus Inc. Knoldus does niche Reactive and Big Data product development on Scala, Spark, and Functional Java. Knoldus has a strong focus on software craftsmanship, which ensures high-quality software development. It partners with the best in the industry, like Lightbend (Scala Ecosystem), Databricks (Spark Ecosystem), Confluent (Kafka), and Datastax (Cassandra). Vikas has been working in the cutting-edge tech industry for 20+ years. He was an ardent fan of Java, with multiple high-load enterprise systems to boast of, till he met Scala. His current passions include utilizing the power of Scala, Akka, and Play to make Reactive and Big Data systems for niche startups and enterprises that would like to change the way software is developed. To know more, send a mail to hello@knoldus.com or visit www.knoldus.com.
