Machine learning


Interest of: Hans-Christian Boos
Algorithms that improve automatically through experience; part of artificial intelligence

Machine learning (ML) is the study of computer algorithms that improve automatically through experience and by the use of data.[1] It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
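
To make the contrast with explicit programming concrete, the following is a minimal sketch of the email-filtering example, assuming Python with scikit-learn; the toy messages, labels and the naive Bayes model are inventions of this illustration, not something prescribed by any particular system.

    # Minimal sketch: a spam filter learned from labelled examples rather than
    # hand-written rules. scikit-learn and the toy data are assumptions of this
    # illustration only.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # "Training data": sample emails with known labels.
    train_texts = [
        "win a free prize now",
        "limited offer click here",
        "meeting agenda for monday",
        "project status report attached",
    ]
    train_labels = ["spam", "spam", "ham", "ham"]

    # The model is built from the samples; no filtering rules are written by hand.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(train_texts, train_labels)

    # The fitted model then makes predictions on mail it has never seen.
    print(model.predict(["free prize offer", "status meeting on monday"]))
    # expected on this toy data: ['spam' 'ham']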

A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning. In its application across business problems, machine learning is also referred to as predictive analytics.
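
As a small illustration of the optimization view, the sketch below fits a one-variable linear model by gradient descent on a squared-error loss; it assumes Python with NumPy, and the data points and learning rate are invented for the example.

    # "Learning" as optimization: minimize a mean-squared-error loss by gradient
    # descent. NumPy and the made-up data are assumptions of this sketch.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.0, 3.0, 5.0, 7.0])   # underlying rule: y = 2x + 1

    w, b = 0.0, 0.0   # model parameters to be learned
    lr = 0.05         # learning rate (step size)

    for _ in range(2000):
        pred = w * x + b
        grad_w = 2 * np.mean((pred - y) * x)   # d(loss)/dw
        grad_b = 2 * np.mean(pred - y)         # d(loss)/db
        w -= lr * grad_w
        b -= lr * grad_b

    print(round(w, 2), round(b, 2))   # converges towards 2.0 and 1.0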

Machine learning involves computers discovering how they can perform tasks without being explicitly programmed to do so. It involves computers learning from data provided so that they carry out certain tasks. For simple tasks assigned to computers, it is possible to program algorithms telling the machine how to execute all steps required to solve the problem at hand; on the computer's part, no learning is needed. For more advanced tasks, it can be challenging for a human to manually create the needed algorithms. In practice, it can turn out to be more effective to help the machine develop its own algorithm, rather than having human programmers specify every needed step.
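
A sketch of that contrast, assuming Python with scikit-learn (the threshold task and the labelled examples are invented for illustration): for a simple task the programmer can state the rule directly, while for a task whose rule is not known in advance the machine can be given labelled examples and left to work out the rule itself.

    # Explicit programming versus learning from examples. scikit-learn and the
    # toy data are assumptions of this illustration only.
    from sklearn.tree import DecisionTreeClassifier

    # Simple task: every step of the rule can be written down by hand.
    def is_fever_explicit(temp_c: float) -> bool:
        return temp_c >= 38.0

    # Harder task: the threshold is not known, but labelled examples are
    # available, so the machine infers its own rule from them.
    temps = [[36.5], [37.0], [37.2], [38.1], [38.6], [39.4]]
    labels = [0, 0, 0, 1, 1, 1]   # 1 = fever, as judged by an expert

    learned_rule = DecisionTreeClassifier().fit(temps, labels)

    print(is_fever_explicit(38.3))         # True, from the hand-written rule
    print(learned_rule.predict([[38.3]]))  # [1], from the rule the model inferred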

Limitations

Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[2][3][4] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[5]

In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed in the resulting collision.[6] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver, even after years of development and billions of dollars of investment.[7][8]

Machine learning has been used as a strategy to keep the evidence in systematic reviews up to date and to cope with the growing reviewer burden caused by the expansion of the biomedical literature. While its performance has improved with larger training sets, it has not yet developed sufficiently to reduce the workload without limiting the sensitivity needed to capture the relevant research findings themselves.[9]

Bias

Machine learning approaches in particular can suffer from different data biases. A machine learning system trained only on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on man-made data, machine learning is likely to pick up the same ingrained and unconscious biases already present in society. Language models learned from data have been shown to contain human-like biases.[10] Machine learning systems used for criminal risk assessment have been found to be biased against black people.[11][12] In 2015, Google Photos would often tag black people as gorillas,[13] and in 2018 this still was not well resolved: Google reportedly was still using the workaround of removing all gorillas from the training data, and thus could not recognize real gorillas at all.[14] Similar problems with recognizing non-white people have been found in many other systems.[15] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[16]

Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There's nothing artificial about AI... It's inspired by people, it's created by people, and, most importantly, it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."[17]
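
A minimal sketch of the first of these failure modes (a model trained only on current customers being applied to a group absent from the training data), assuming Python with NumPy and scikit-learn; the synthetic "customer" data, and the opposite feature-label relationship in the second group, are invented purely to make the effect visible.

    # Sampling-bias sketch: a model trained on one group performs poorly on a
    # group not represented in the training data. NumPy, scikit-learn and the
    # synthetic data are assumptions of this illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, slope):
        # One feature x and a binary label y; the x-to-y relationship differs by group.
        x = rng.normal(size=(n, 1))
        y = (slope * x[:, 0] + rng.normal(scale=0.3, size=n) > 0).astype(int)
        return x, y

    x_a, y_a = make_group(500, slope=1.0)    # group A: the current customers
    x_b, y_b = make_group(500, slope=-1.0)   # group B: a new, unrepresented group

    model = LogisticRegression().fit(x_a, y_a)   # trained on group A only

    print("accuracy on group A:", model.score(x_a, y_a))   # high
    print("accuracy on group B:", model.score(x_b, y_b))   # far below chance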


References

  1. http://www.cs.cmu.edu/~tom/mlbook.html
  2. https://web.archive.org/web/20170320225010/https://www.bloomberg.com/news/articles/2016-11-10/why-machine-learning-models-often-fail-to-learn-quicktake-q-a
  3. https://hbr.org/2017/04/the-first-wave-of-corporate-ai-is-doomed-to-fail
  4. https://venturebeat.com/2016/09/17/why-the-a-i-euphoria-is-doomed-to-fail/
  5. https://www.kdnuggets.com/2018/07/why-machine-learning-project-fail.html – 9 Reasons why your machine learning project will fail
  6. https://www.economist.com/the-economist-explains/2018/05/29/why-ubers-self-driving-car-killed-a-pedestrian
  7. https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/
  8. https://www.wsj.com/articles/ibm-bet-billions-that-watson-could-improve-cancer-treatment-it-hasnt-worked-1533961147
  9. https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-020-01450-2
  10. http://papers.nips.cc/paper/6227-an-algorithm-for-l1-nearest-neighbor-search-via-monotonic-embedding.pdf
  11. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  12. https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html
  13. https://www.bbc.co.uk/news/technology-33347866 – Google apologises for racist blunder
  14. https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
  15. https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html
  16. https://www.technologyreview.com/s/603944/microsoft-ai-isnt-yet-adaptable-enough-to-help-businesses/
  17. https://www.wired.com/story/fei-fei-li-artificial-intelligence-humanity/