In our previous post, we talked about the different types of agents that can be built for a business. Any type of agent (model-based, goal-based, utility-based, etc.) can be built as a learning agent, or not. Learning allows the agent to go beyond the knowledge it started with and adapt to its operating environment.
Components of a Learning Agent
A learning agent can be divided into four components:
- Learning Element – Responsible for making improvements to the agent's behaviour over time.
- Performance Element – Responsible for selecting and taking the appropriate action. It takes percepts from the sensors and passes actions on to the actuators.
- Critic – Measures how well the agent is doing. The percepts by themselves say nothing about success; the critic evaluates them against a performance standard. The learning element takes constant feedback from the critic and passes improved decision-making logic to the performance element.
- Problem Generator – This is the agent's chaos monkey. It creates conditions that lead to informative experiences: it suggests exploratory actions, the performance element carries them out, and knowledge of the results is passed back to the learning element.
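The four components above can be sketched as classes. This is a minimal illustration with hypothetical names, where the "environment" is just a number line and the performance standard is a target position (none of these specifics come from the original post):

```python
class Critic:
    """Scores each percept against a fixed performance standard."""
    def __init__(self, standard):
        self.standard = standard

    def feedback(self, percept):
        # Higher is better; zero means the standard is met exactly.
        return -abs(self.standard - percept)


class LearningElement:
    """Turns the critic's feedback into improvements to the action rule."""
    def __init__(self):
        self.step_size = 4  # the parameter the performance element will use

    def improve(self, feedback, previous_feedback):
        # If the last action made things worse, become less aggressive.
        if previous_feedback is not None and feedback < previous_feedback:
            self.step_size = max(1, self.step_size - 1)


class PerformanceElement:
    """Selects an action from the current percept and learned parameters."""
    def select_action(self, percept, standard, step_size):
        return step_size if percept < standard else -step_size


class ProblemGenerator:
    """Occasionally proposes an exploratory action to gather new experience."""
    def suggest(self, tick):
        return 2 if tick % 7 == 0 else None  # arbitrary exploration schedule
```

The point of the separation is that the performance element never inspects the critic or the problem generator; it only consumes what the learning element hands it.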
If we go through the sequence of steps, this is how the action might flow. The sensor gets an input from the environment (1). This input goes to the critic (2) and the performance element (3). The performance element must take an action on the basis of the input, for which it consults the learning element to decide the best action (4). The action is passed to the actuator (5), which applies it to the environment (6).
At all times, in the background or in parallel (T1), a learning process runs between the critic, learning element, problem generator, and performance element. The problem generator suggests alternative actions on the basis of what the learning element is learning, the performance element takes those actions, and the outcomes feed back into learning. This keeps the learning cycle going.
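The flow above can be condensed into a toy loop. Everything concrete here is a placeholder assumption: the environment is a position on a number line, the critic's score is negative distance to a target, and "learning" is simply shrinking the step size whenever the last action made the score worse.

```python
target = 10           # performance standard the critic measures against
position = 0          # environment state the sensor perceives
step = 7              # action-selection parameter the learning element tunes
previous_score = None

for tick in range(20):
    percept = position                    # (1) sensor reads the environment
    score = -abs(target - percept)        # (2) critic scores the percept
    if previous_score is not None and score < previous_score:
        step = max(1, step - 1)           # (4) learning element refines the rule
    previous_score = score
    # (3) performance element selects an action from the percept
    action = step if percept < target else -step
    position += action                    # (5, 6) actuator changes the environment
```

Early on the agent overshoots the target, the critic reports a worse score, and the learning element shrinks the step until the agent settles near the target; that is the cycle the paragraph above describes.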
How the Components Work Together
How the components work together can be quite involved, depending on the use case. There are a few ways in which components can represent state and interact with each other:
- Atomic – This is the simplest way. Components are not concerned with each other's internal state; they treat one another as black boxes with inputs and outputs. The output of component B depends only on the input from component A. In our case, for example, the performance element might simply receive knowledge from the learning element and base its action on that.
- Factored – This representation splits the state into variables and attributes, and tries to look into the state of the individual component. Many important areas of AI are based on factored representations, including constraint satisfaction algorithms, propositional logic, planning, Bayesian networks, and various machine learning algorithms.
- Structured – This is the most complex of the three. Each state contains a set of objects, each of which may have attributes of its own as well as relationships to other objects. Many real-life associations between variables and attributes cannot be expressed in a factored representation. For example, for a drone crossing a tree, a factored representation can express HeightOfDroneGreaterThanHeightOfTree as true or false. But if the decision also depends on relationships, such as birds flying around the tree, a flat set of variables is no longer enough and a structured representation is required.
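The three representations can be contrasted on the drone-and-tree example. This is an illustrative sketch only; all the variable names, heights, and the "circling" relation are made up for the example:

```python
# Atomic: the state is an opaque label with no internal structure.
atomic_state = "drone_blocked"

# Factored: the state is a flat mapping of variables to values.
factored_state = {
    "drone_height_m": 12.0,
    "tree_height_m": 15.0,
}
# A factored rule can compare variables...
drone_above_tree = (
    factored_state["drone_height_m"] > factored_state["tree_height_m"]
)

# Structured: the state holds objects with attributes *and* relationships
# between them, e.g. birds circling the tree, which a flat variable list
# cannot naturally express.
structured_state = {
    "objects": {
        "drone": {"height_m": 12.0},
        "tree": {"height_m": 15.0},
        "birds": [{"height_m": 14.5}, {"height_m": 16.0}],
    },
    "relations": [
        ("circling", "birds", "tree"),
    ],
}
# A structured rule can reason over objects and relations together:
safe_to_cross = (
    structured_state["objects"]["drone"]["height_m"]
    > structured_state["objects"]["tree"]["height_m"]
    and ("circling", "birds", "tree") not in structured_state["relations"]
)
```

The factored state answers "is the drone higher than the tree?", but only the structured state can additionally ask "is anything circling the tree?", which is exactly the kind of relationship the paragraph above describes.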
What kind of agents are you setting up for your AI practice? Knoldus would be excited to help you on the journey.