In Artificial Intelligence, you don’t know the exact solution to the problem. If you did, you would simply implement an algorithm to solve it, and it wouldn’t be called Artificial Intelligence. So the approach is to search for a solution.
Problem Solving – Current State → Goal State.
Limitation: You provide the agent with a representation, and that representation is static. The agent can only search.
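Searching from the current state to the goal state can be sketched with a minimal breadth-first search. The state space here (integers where one step adds 1 or doubles) is a made-up toy example, not anything from the notes above:

```python
from collections import deque

def bfs(start, goal, successors):
    """Breadth-first search from start to goal.
    successors(state) returns the states reachable in one step.
    Returns a shortest path as a list of states, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy state space: one step either adds 1 or doubles the number.
path = bfs(1, 10, lambda s: [s + 1, s * 2])
# path is a shortest sequence of states from 1 to 10
```

The agent knows nothing about *why* doubling is useful – it blindly explores states until the goal appears, which is exactly the limitation noted above.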
Knowledge Based Agent
Knowledge-based agents contain a representation of the world.
Logic – a kind of representation for what is “true” in the world.
Inference (if we know that these things are true, then what else is true?) – helps agents go beyond the representation you provided.
If “Man is mortal” and
“John is a man”
are true in our world, then we can “infer” that
“John is mortal”
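The inference above can be sketched as simple forward chaining: keep applying rules until no new facts appear. Representing facts as plain strings is a deliberate simplification, not a full logic engine:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until no new
    facts can be inferred. Facts and premises are plain strings."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in facts for p in premises) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# "Man is mortal" becomes a rule; "John is a man" is a known fact.
rules = [(["John is a man"], "John is mortal")]
known = forward_chain(["John is a man"], rules)
# known now also contains "John is mortal"
```

The agent ends up knowing something the programmer never stated explicitly – that is the point of inference.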
Other knowledge representation schemes are available besides the various forms of logic.
Limitation: You can go only as far as the “truth” (only true and false, with nothing in between) deducible from your knowledge. Logic-based agents are usually very inefficient.
“Efficiently” “Searching” (Problem Solving) with “Knowledge”. This combines the approaches of the previous two agents.
Not all facts describing the world are simply “True” or “False”. There is uncertainty and randomness in the real world; facts hold to a degree.
Probabilistic agents use Probability Theory or other representations that incorporate randomness and uncertainty.
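A minimal sketch of how a probabilistic agent revises a degree of belief is Bayes’ rule. The scenario and the numbers below (a rain sensor and a prior of 0.3) are invented for illustration:

```python
def bayes_update(prior, likelihood_true, likelihood_false):
    """Posterior P(H | e) from prior P(H), P(e | H), and P(e | not H)."""
    numerator = likelihood_true * prior
    return numerator / (numerator + likelihood_false * (1 - prior))

# Hypothetical numbers: a sensor fires 90% of the time when it rains,
# 20% of the time when it doesn't; prior belief in rain is 0.3.
posterior = bayes_update(0.3, 0.9, 0.2)
# posterior is about 0.66 -- belief holds to a degree, not True/False
```

Instead of concluding “rain is true”, the agent moves its belief from 0.3 to roughly 0.66 – facts hold to a degree.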
Goal Based Agent -> Utility Based Agent -> Decision Theoretic Agent
An agent must have a goal or a set of goals – otherwise its behavior would be random. This is the concept behind goal-based agents.
Sometimes agents have conflicting goals. In such cases, utility theory (“the satisfaction from achieving each goal”) is employed.
Decision theoretic agents combine utility theory (“how much satisfaction am I going to get from achieving this”) with probability theory (“what is the probability of achieving this”).
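That combination – probability of an outcome times its utility – is the expected utility of an action; the decision-theoretic agent picks the action that maximizes it. The two actions and their numbers below are hypothetical:

```python
def best_action(actions):
    """actions maps a name to a list of (probability, utility) outcomes.
    Returns the action with the maximum expected utility."""
    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical choice: a safe goal vs. a riskier but more satisfying one.
actions = {
    "safe":  [(1.0, 40)],             # certain, modest satisfaction
    "risky": [(0.5, 100), (0.5, 0)],  # coin flip, higher payoff
}
choice = best_action(actions)  # expected utilities: 40 vs. 50
```

Here the risky goal wins (expected utility 50 vs. 40), even though it may yield nothing – probability and utility are weighed together.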
The agents we considered thus far can only utilize information provided by the programmer. But to improve performance with experience, learning is required.
Learning is also required when the problem is so hard that human programmers cannot write the correct program themselves. Instead, they write programs that change the parameters of a model, or perform logical reasoning, so that the programs learn on their own.
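A minimal sketch of “changing parameters of a model”: fitting a single weight by gradient descent. The data below is made up (generated by y = 2x); the point is that the program finds the parameter from experience instead of the programmer hard-coding it:

```python
def learn_weight(examples, lr=0.1, epochs=100):
    """Fit y ~ w * x by gradient descent on squared error.
    The parameter w is adjusted from the data (the agent's experience)."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = w * x - y
            w -= lr * error * x  # step against the error gradient
    return w

# Data generated by y = 2x; the learned weight should approach 2.
w = learn_weight([(1, 2), (2, 4), (3, 6)])
```

No one told the program that the answer is 2 – repeated small corrections from the examples drive the parameter there.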
Agents That Communicate With The Real World
Robots perceive, reason, and act in the real world. Web bots consume natural-language documents on the web. Software like Siri communicates with us.
Most problems in these domains have proven too hard for human programmers to solve directly. Instead, Machine Learning algorithms are implemented – that is, ideas from Learning Agents are used, along with ideas from all the other agents mentioned above.