Latest From Science & Engineering, Medicine & Innovation [03.03.15]

 

Google

 

Digital Car

“NXP Semiconductor Inc. agreed to buy Texas-based Freescale Semiconductor Inc. in a cash-and-stock deal valued at about $11.8 billion. The purchase would vault Netherlands-based NXP to No. 1 supplier of chips for cars.”

 

Deep Learning  #DeepLearning  #MachineLearning  #BigData

 

Mobile Computing

Subfields Of Sciences As Inspiration For Machine Learning Algorithms/Paradigms

  • Perceptrons and Neural Networks were inspired by models of neurons in the brain, so Neuroscience is obviously a major inspiration.
  • Genetic Algorithms, Genetic Programming, Evolutionary Algorithms are inspired by Genetics and Evolutionary Theory.
  • The Simulated Annealing algorithm [1] was invented for solving problems in Statistical Physics and was later used for optimization problems in Artificial Intelligence and Machine Learning.
  • Reinforcement Learning was first studied in Psychology, more specifically in Behavioral Psychology. Now Reinforcement Learning is a branch of Machine Learning. 
  • Statistics is the field most closely tied to Machine Learning apart from Computer Science. Many regression and clustering techniques from Statistics serve as inspiration for Machine Learning algorithms. 
  • Bayesian Models, (Hidden) Markov Models were first studied as part of Probability Theory. 
  • The study of Logic acts as the basis for many Knowledge Based Machine Learning Paradigms:
    • Explanation Based Learning
    • Relevance Based Learning
    • Inductive Logic Programming

References:

Quora Question: What Are Some Research Projects In Facebook AI Lab?

Quora Question: What Book Serves As The Best Introduction To Machine Learning?

 

Introductory Books on Machine Learning

 

 

Quora Question: What Are The Best Machine Learning As A Service Companies and Startups?

Google Prediction API [1] [2] is a Machine Learning as a Service API.

WolframAlpha Pro [3] can be considered a “Data Computation as a Service”.

Have you tried them yet? 

Links

  1. Google Prediction API – Developer’s Guide
  2. Google Prediction API
  3. WolframAlpha Pro

Machine Learning Algorithms: Brief Introduction


Supervised Learning
You have examples of inputs and outputs. You have to learn a model that turns inputs into outputs.

  • Decision Tree Learning
    • If you want to learn a propositional logic-based theory.
    • Learning from features 
      • Propositional sentences on features.
      • If 18 < age < 35, sex = male, and location = Chittagong, then the person is likely to vote for “Nagorik Shakti”.
  • Bayesian Learning
    • Flat input (vectors, etc.); not features.
      • Language Applications heavily use Bayesian Learning.
    • Counting occurrences (Frequency – Probability). 
    • Apply Bayes’ rule; a rigorous definition of terms is not required; rather, define terms to suit your purpose and computational power (a minimal counting sketch follows this list). 
  • Neural Network Learning
    • Learn weights on inputs.
      • Learning parameters of equations. 
  • Genetic Programming
    • You join programs together and mutate them to see whether the resulting program can generate the required outputs from the inputs.
    • How do you represent programs?
      • You represent programs as trees (remember context-free grammars, CFGs?). Lisp is a natural fit, but any language can be used. 
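
To make the counting idea above concrete, here is a minimal sketch of Bayesian learning by counting, in plain Python. The toy messages, the word-splitting feature scheme, and the add-one smoothing are my own illustrative assumptions, not part of the original text.

```python
from collections import Counter, defaultdict

# Toy labeled examples: (text, class label). Both are hypothetical.
examples = [
    ("buy now", "spam"), ("meeting today", "ham"),
    ("buy cheap", "spam"), ("project meeting", "ham"),
]

# Counting occurrences: class frequencies and word frequencies per class.
class_counts = Counter(label for _, label in examples)
word_counts = defaultdict(Counter)
for text, label in examples:
    word_counts[label].update(text.split())

def score(text, label):
    """Unnormalized P(label) * product of P(word | label), with add-one smoothing."""
    p = class_counts[label] / sum(class_counts.values())
    denom = sum(word_counts[label].values()) + len(word_counts[label])
    for word in text.split():
        p *= (word_counts[label][word] + 1) / denom
    return p

# Apply Bayes' rule: pick the class with the higher posterior score.
print(max(class_counts, key=lambda c: score("cheap meeting", c)))
```

Real Naive Bayes implementations work with log probabilities and a shared vocabulary, but the shape of the computation is the same: count, then apply Bayes’ rule.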


Knowledge Based Learning

You have some knowledge. As you gain new knowledge, how do you incorporate it into your existing knowledge base?

  • Explanation Based Learning
    • You solve a problem and extract general principles from your solution for future reuse to other problems.
    • Suppose you have logically inferred that A => B => C => D. Then you can conclude (and learn) that A => D. From that point on, whenever you see A, you can directly conclude D. This is Explanation Based Learning in action (see the sketch after this list).
    • One kind of memoization (storing results to avoid recomputation), but much more general.
  • Relevance Based Learning
    • You know a general rule. You learn something. And then using the general rule that you knew before and the newly learned knowledge, you infer (learn) a new generalization.
    • You knew A & B => C. You learn that A and B are true. You infer (learn) that C is true.
  • Inductive Logic Programming
    • From examples of logical sentences, you infer (and hence learn) general rules.
    • Example: From examples of family relations (say, Jack is Father of Anthony, Sam is Grandparent of Jill, etc.), you learn that
      • for all Z and X, Grandparent(Z, X) => Parent(Z, Y) & Parent(Y,X) for some Y. 
      • Can induce new terms / relations. For example, ILP can learn “Parent” from examples of Father and Mother relations.
      • Inductive Logic Programming has been used to make Scientific Discoveries
        • Learning new relations and predicates is as important a step in scientific discovery as forming new rules. Introduction of the concept of “acceleration” helped Newton form his famous law F (force) = ma (mass * acceleration). Without the concept of acceleration (which Galileo had introduced earlier), it would have been very difficult for Newton to come up with a law that describes the relationship between force and the rate of change of an object’s velocity.
        • In the above example, we saw that our ILP program invented the concept “Parent” and used it to learn (and compactly represent) the rule for “Grandparent”.
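
A minimal sketch of the A => B => C => D example above, assuming a deliberately simplistic rule format (a dict from premise to conclusion): chain the implications once, store the derived macro-rule A => D, and reuse it later without re-deriving the intermediate steps.

```python
# Known single-step implications, stored as premise -> conclusion.
rules = {"A": "B", "B": "C", "C": "D"}

def explain(start):
    """Follow the chain of implications from `start` to the final conclusion."""
    fact = start
    while fact in rules:
        fact = rules[fact]
    return fact

# Explanation Based Learning step: generalize the derivation into one macro-rule.
macro_rules = {"A": explain("A")}   # learns A => D

# Later, on seeing A, we jump straight to D without redoing the chain.
print(macro_rules["A"])             # -> D
```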


More on Machine Learning

This is my 200th Post on tahsinversion2.blogspot.com!


Application Of Data Analytics, Data Mining, Machine Learning & Network Science To Election Campaign Strategy


Analysis Of Political Survey Data

  • Rows contain data of each participant in the survey (Age, M/F, Area, Profession, Which candidate are you going to vote for, Why, Which party did you vote for in 2008 Election, Why did you vote for that candidate, Which party did you vote for in 2001 Election). 
  • Columns are features.
  • The goal of Data Analysis is to group voters together to determine strategies. 
    • Our candidate is weak in that particular area of his constituency.
      • How do we win votes using our network map?
    • Our candidate is weak among that particular age group of his constituency.
    • Our candidate is weak among people belonging to that particular profession of his constituency.
      • What social initiatives can we take for people belonging to that particular profession? 
    • Swing voters – x% of total voters. 
      • People who voted for candidates from different parties in 2001 and 2008 Elections.
    • For people belonging to that profession, the reason behind candidate preference is “X”.
      • From answers to our survey question – “Why did you vote for that candidate?” 
    • For people belonging to that age group (say, young generation), the reason behind candidate preference is “Y”.
      • What can we do to win the votes of this age group? Look at the reasons given in the survey answers.
  • Usage of Machine Learning Algorithms for extraction of patterns from Data.
    • Decision Tree Learning can be utilized for predicting the candidate preference of a particular voter from the voter’s features (a toy sketch follows this list). 
      • A Decision Tree might learn, for example, if a voter 
        • 18 < age < 35
        • area = “X”
        • is a Male
      • Then, he will vote for “Nagorik Shakti”. 
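
As an illustration of the bullet above, here is a minimal sketch assuming scikit-learn is available; the survey rows, the numeric encoding of the features, and the party labels are entirely hypothetical.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Features per voter: [age, is_male (1/0), area_code]; label: party voted for.
X = [[25, 1, 3], [60, 0, 3], [30, 1, 3], [45, 0, 1], [22, 1, 3], [70, 1, 1]]
y = ["Nagorik Shakti", "Other", "Nagorik Shakti", "Other", "Nagorik Shakti", "Other"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Inspect the learned age/sex/area rules and predict for a new voter.
print(export_text(tree, feature_names=["age", "is_male", "area_code"]))
print(tree.predict([[28, 1, 3]]))   # likely "Nagorik Shakti" on this toy data
```

In practice the survey features would be properly encoded (one-hot areas, professions, and so on) and the tree validated on held-out voters.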




Usage Of Network Map

Larry Page: Where’s Google Going Next?

Larry Page wants to change the world.


“Organizing The World’s Information”, Search, Artificial Intelligence & Machine Learning

  • Fifteen years of work on search and organizing the world’s information, but it is far from done.
  • Future of search – contextual, personal. Current: Google Now.
  • Google’s focus on Artificial Intelligence and Machine Learning led it to buy DeepMind, which combines Neuroscience and Machine Learning (AI’s European flavor!) to develop video game playing software. 
  • More Machine Learning: speech recognition (Google Now) and automatically forming the concept of “Cat” from YouTube videos (the “Deep Learning” algorithm [1]) are along the same lines.

Google Improves Lives Of People

  • How a Kenyan farmer finds his crop’s problems with Google Search and how people use Google search to solve problems in their lives.
  • Google Loon – providing Internet access to the two-thirds of the world’s population who do not have it. A worldwide mesh of balloons that can cover the whole world and provide Internet access.

Security and Privacy

  • Wants to be notified about the surveillance program.
  • Thinks of providing doctors with anonymized access to patient records to improve healthcare.
  • View on privacy – sharing information with the right people in the right way leads to great things.

Transportation

  • Super excited about the prospects of self-driving cars and the transportation system to come. Autonomous self-driving cars save lives and save space. 
  • “Bikes above the street!” – Bikes moving on strings hanging in the air. 

Technology & Future

  • The more you know about technology, the more possibilities you see.
  • In technology, we need revolutionary change, not incremental change.
  • Lots of companies don’t succeed over time. What do they fundamentally do wrong? They miss the future. So Larry tries to focus on that: what is the future going to be, how do we create it, and how do we focus our organization on it? The most important trait is curiosity: looking at things that no one else is looking at, working on things that no one else is working on, and taking the risk to make it happen.

What is Machine Learning?

One of the most sought-after skills of a modern Software Engineer or Computer Scientist is Machine Learning.

So, what is Machine Learning?

What we know about computers is that programmers write programs and computers follow the steps specified in programs. Wouldn’t it be wonderful if computers could learn and improve their performance with experience / feedback?

That’s where Machine Learning comes in.

Machine Learning algorithms enable computers to learn from data and/or experience (experience especially in the case of knowledge based learning) to improve their performance.


How do computers do that?

Let’s start with inductive learning.

Traditional programming – writing functions that turn inputs to outputs.
Inductive Learning – learning / inducing functions from inputs and outputs.
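
A tiny worked example of inducing a function from input-output pairs: an ordinary least-squares line fit in plain Python. The data points are made up for illustration.

```python
# Example input-output pairs (hypothetical): we suspect output ~ a * input + b.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.9, 5.1, 7.0, 9.2]

# Induce the function: ordinary least squares for slope a and intercept b.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def learned(x):
    """The induced function: turns a new input into a predicted output."""
    return a * x + b

print(round(learned(5.0), 2))   # roughly 11.25 on this toy data
```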

 


There is another aspect to Machine Learning. In some problem domains, it’s just too hard to write a program that solves the problem. For example, it’s too hard to write a program that can recognize handwritten letters or turn speech to text.


These are all application areas for Machine Learning.

Example Applications

  • US Postal Service – recognition of handwritten postal code
  • Amazon product recommendation 
  • Netflix movie recommendation
    • Netflix declared a $1 million prize for the first program that could improve its movie recommendation algorithm by 10%. Alas, the prize has already been won! But there is quite a lot of money waiting to be won at Kaggle if you are interested [3]! 
  • Facebook’s customized Newsfeed for each and every user
  • Gmail’s spam Email detection
  • Google’s Self driving car – object recognition
  • iPhone Siri – speech recognition


Let’s try to define Machine Learning.

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”
– Tom Mitchell [1].

So, Machine Learning Agents are agents whose performance improves with experience.


When do we use Machine Learning? 
  • Existence of Patterns.
    • Pattern: Repeated feature among a set of objects; general characteristics; generalizations.
    • Existence of patterns means it could be possible for us to build a model that explains our data.
    • Sidenote: Mathematics is the study of patterns.
  • Hard to write an equation or algorithm that solves the problem.
  • Availability of Data [2]. 


Machine Learning algorithms are classified according to the type of feedback available.

  • Supervised Learning
    • Learning / inducing a model / function from example input-output pairs.
  • Unsupervised Learning
    • Learning patterns from inputs when no output is supplied (see the clustering sketch after this list).
  • Reinforcement Learning
    • Learning how to behave from feedback given at the end of a sequence of steps.
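
For contrast with the supervised sketches above, here is a minimal unsupervised example: grouping unlabeled points into two clusters with a few iterations of k-means. The one-dimensional points, the choice of k = 2, and the naive initialization are illustrative assumptions.

```python
# Minimal k-means sketch in plain Python; the 1-D points and k = 2 are arbitrary.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers = [points[0], points[-1]]           # naive initialization

for _ in range(10):                          # a few refinement iterations
    clusters = [[], []]
    for p in points:                         # assign each point to its nearest center
        clusters[min((abs(p - c), i) for i, c in enumerate(centers))[1]].append(p)
    centers = [sum(c) / len(c) for c in clusters]   # recompute the centers

print(centers)                               # roughly [1.0, 8.07]
```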



Knowledge Ontology for Machine Learning

  • Supervised Learning
    • Recommendation
    • Classification
      • Bayesian Learning
      • Neural Network
        • Deep Learning
      • Decision Tree
      • Support Vector Machine
      • Genetic Programming
      • Ensemble Learning
    • Regression
  • Unsupervised Learning
    • Clustering
    • Finding Independent Features
  • Knowledge Based Learning 
    • Once you have learned something, how do you keep adding to your knowledge?
    • Brings together Knowledge Representation and Machine Learning.
      • Explanation Based Learning
      • Relevance Based Learning
      • Inductive Logic Programming
  • Statistical Learning
    • Learning Hidden Markov Models
  • Reinforcement Learning

References

  1. Machine Learning by Tom Mitchell
  2. Learning From Data by Yaser S. Abu-Mostafa and others
  3. Kaggle
  4. Coursera course on Machine Learning by Andrew Ng
  5. Artificial Intelligence: A Modern Approach
  6. Programming Collective Intelligence

Knowledge Ontology (AI) For Machine Learning (AI)

  • Recommendation
  • Supervised Learning
    • Classification
      • Bayesian Learning
      • Neural Network
        • Deep Learning
      • Decision Tree
      • Support Vector Machine
    • Regression
  • Unsupervised Learning
    • Clustering
  • Statistical Learning
    • Learning Hidden Markov Models 
  • Knowledge Based Learning
    • Explanation Based Learning
    • Relevance Based Learning
    • Inductive Logic Programming
  • Reinforcement Learning

Overview of (Artificially) Intelligent Agents

Searching Agent

In Artificial Intelligence, you don’t know the exact solution to the problem. If you knew it, you would simply implement an algorithm to solve the problem and it wouldn’t be termed Artificial Intelligence. So the approach is to search for a solution.

Problem Solving – Current State -> Goal State.

Limitation: You provide the agent with a representation and the representation is static. The agent can only search.

Knowledge Based Agent

Knowledge based agents contain a representation of the world.

Logic – a kind of representation for what is “true” in the world.

Inference (if we know this and that are true, then what else is true?) – helps agents go beyond what you provided as the representation.

Example: If
“Man is mortal” and
“John is a man”
are true in our world, then we can “infer” that
“John is mortal”
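
A minimal sketch of that inference step in Python, using an invented string encoding for facts and rules: one round of forward inference instantiates the rule against the known fact and derives the new fact.

```python
# Known facts and one rule, encoded as strings purely for illustration.
facts = {"Man(John)"}
rules = [("Man(x)", "Mortal(x)")]        # "Man is mortal": Man(x) => Mortal(x)

# One round of forward inference: instantiate each rule against each fact.
new_facts = set()
for premise, conclusion in rules:
    for fact in facts:
        if fact.startswith(premise.split("(")[0] + "("):   # predicate matches
            binding = fact[fact.index("(") + 1:-1]          # e.g. x = "John"
            new_facts.add(conclusion.replace("x", binding))

print(new_facts)    # {'Mortal(John)'}
```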

Other knowledge representation schemes besides the various forms of logic are also available.

Limitation: You can only go as far as the “truth” (only true and false, with nothing in between) deducible from your knowledge. Logic based agents are usually very inefficient.

Planning Agent

“Efficiently” “searching” (problem solving) with “knowledge”. Combines the approaches of the previous two agents.

Probabilistic Agent

Not all facts describing the world are simply “True” or “False”. There is uncertainty and randomness in the real world; facts hold to a degree.

Probabilistic agents use Probability Theory or other representations that incorporate randomness and uncertainty.

Goal Based Agent -> Utility Based Agent -> Decision Theoretic Agent
An agent must have a goal or set of goals – otherwise its behavior would be random. This is the concept behind the goal based agent.

Sometimes agents have conflicting goals. In cases like these, utility theory (“satisfaction from achieving each of the goals”) is employed.

Decision theoretic agents combine utility theory (“how much satisfaction am I going to get from achieving this”) with probability theory (“what is the probability of achieving this”).
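
A minimal sketch of that combination: weight each action’s utility by the probability of achieving it and pick the action with the highest expected utility. The actions and numbers are invented.

```python
# Hypothetical actions: (name, probability of success, utility if achieved).
actions = [("ambitious plan", 0.3, 100), ("safe plan", 0.9, 40)]

# Decision-theoretic choice: maximize expected utility = probability * utility.
best = max(actions, key=lambda a: a[1] * a[2])
print(best[0], best[1] * best[2])   # -> safe plan 36.0
```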

Learning Agent

The agents we considered thus far can only utilize information provided by the programmer. But to improve performance with experience, learning is required.

Learning is also required in situations where the problem is so hard that human programmers can’t write the correct program on their own. So they write programs that learn on their own, either by adjusting the parameters of a model or by doing logical reasoning.

Agents That Communicate With The Real World

Robots perceive, reason and act in the real world. Web-bots consume natural language documents on the web. Software like Siri communicates with us.

Most of the problems in these domains have proven too hard for human programmers to solve directly. Rather, Machine Learning algorithms are implemented; that is, ideas from Learning Agents are used. Ideas from all the other agents mentioned above are utilized as well.

Reference
Artificial Intelligence: A Modern Approach