Jeremy Howard, an influential figure in AI whose early work helped pave the way for ChatGPT, carries a sense of personal failure.
In the midst of the vibrant city of San Francisco, Howard found himself immersed in a groundbreaking project that would pave the way for remarkable innovations.
At the time, machines still struggled to make sense of ordinary written language. Yet this Melbourne-born tech entrepreneur and data scientist had an audacious idea for overcoming that hurdle and granting AI access to the vast expanse of human knowledge recorded throughout history.
In 2017, he embarked on a journey to address a central challenge in natural language processing (NLP): enabling machines to comprehend and generate human-like text. That work helped set in motion developments that, within five years, produced ChatGPT, a tool that has transformed writing and research.
However, despite his significant contributions to NLP, Jeremy Howard laments that AI technology has consolidated under a few dominant corporations, and he worries that the consequences may go far beyond what anyone initially expected.
From Experiment to Empowerment
In late 2017, Jeremy Howard embarked on an experiment to train a machine learning system to read and write.
He utilized a large language model (LLM) trained on English Wikipedia and tested its ability to understand sentiment in movie reviews. The LLM achieved an impressive 93% accuracy in inferring sentiment.
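For readers curious what such an experiment looks like in practice, the sketch below is a rough illustration rather than Howard's original 2017 code: it uses the modern fastai library (which Howard co-created) to take an AWD-LSTM language model pretrained on Wikipedia text and fine-tune it as a sentiment classifier on the IMDb movie-review dataset.

```python
from fastai.text.all import *

# Download the IMDb movie-review dataset and build dataloaders
# (the 'test' folder is used as the validation set).
dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')

# AWD_LSTM ships pretrained on Wikipedia text (the WikiText-103 corpus);
# the classifier reuses that general knowledge of English.
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)

# Fine-tune the pretrained model to label reviews as positive or negative.
learn.fine_tune(4, 1e-2)

# Try the classifier on a new review.
learn.predict("I really liked that movie!")
```

The key idea, mirrored in the code, is transfer learning: the model first absorbs general language patterns from Wikipedia, then needs only a modest amount of labeled movie-review data to learn the specific task of judging sentiment.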
Recognizing the potential impact of this technology, Jeremy and his wife Rachel Thomas founded fast.ai, an online university offering free machine learning courses to democratize access to AI.
Sebastian Ruder, a PhD student at the time, collaborated with Jeremy on the work and has acknowledged the significant contribution Howard made to the field.
However, despite their efforts, Jeremy expresses concern about the concentration of power and the wealthy individuals working to restrict AI’s accessibility.
Big Tech’s Entry: The Involvement of Tech Giants in the Development of Language Models
Howard’s demonstration that language models trained on general text could be adapted to new tasks, combined with the Transformer architecture developed at Google, was taken up by OpenAI to create GPT. GPT was trained on a large dataset of text, and it proved to be more capable than its predecessors.
However, training these larger models was expensive, and the small AI companies that developed them needed money to continue their work.
This led to big tech companies becoming involved in the development of LLMs, which raised concerns about the future of open AI research.
Meanwhile, OpenAI’s founding mission to democratize AI research has been challenged by the financial demands of developing cutting-edge AI models.
The company’s initial commitment to non-profit status was set aside in favor of partnerships with big tech, most notably Microsoft, to secure the resources needed to advance its technology.
This shift has raised concerns that big tech could exert excessive control over AI development, and it underscores how central a handful of AI companies have become to the global landscape.
The Race for AI Dominance: Wealth, Power, and the Future of Democracy
In the world of AI, concerns about the concentration of power are widespread. Notable figures like Yoshua Bengio, a leading AI researcher, express worry that a few dominant companies could pose a threat to democracy by holding immense power and influence.
Rumman Chowdhury, a Harvard Fellow specializing in responsible AI, describes the current AI landscape as an intense competition, with a small number of individuals amassing significant wealth and power.
Microsoft (OpenAI), Google, Amazon (Anthropic), and Meta are identified as frontrunners in the AI race.
Jeremy Howard, along with Rachel Thomas, has relocated to Australia, where he teaches machine learning as an honorary professor at the University of Queensland.
Despite his significant contributions to the field, Mr. Howard questions his achievements, expressing dismay at the wealthy individuals who are working to restrict AI’s accessibility.
Source: PhilNews24 | November 15, 2023