ELIZA’s Creation: The Intersection of Psychology and Natural Language Research

The creation of ELIZA by Joseph Weizenbaum was deeply rooted in his interest in language and the potential of computers to explore human communication. Inspired by conversations with colleagues like Kenneth Colby, a psychiatrist exploring neurotic behavior through computational models, and Victor Yngve, a pioneer in pattern matching, Weizenbaum sought to test whether complex-seeming responses could arise from simple rules. This curiosity led to ELIZA, a program that simulated conversations in a Rogerian therapy style.

Weizenbaum chose the Rogerian framework because it allowed for an illusion of mutual understanding in dialogue. He likened this to real-life conversations where vague responses often appear appropriate despite incomplete understanding, a dynamic he saw as essential to communication. The psychiatrist-patient setting provided a context where ignorance on the part of the “analyst” wouldn’t break the illusion, as the goal was not factual exchange but to elicit thoughts and feelings from the “patient.”

Rogerian therapy, developed by Carl Rogers, is a form of therapy rooted in humanistic principles. It emphasizes creating a nonjudgmental, empathetic, and supportive environment where the client feels safe to explore their thoughts and emotions. The therapist acts more as a facilitator than an authority figure, focusing on active listening and reflecting the client’s feelings without directing the conversation.

The “illusion of mutual understanding” in Rogerian therapy stems from several key aspects:

  1. Non-directive Approach: The therapist avoids imposing interpretations or leading the conversation, instead encouraging the client to guide the dialogue. This creates the perception that the therapist understands the client’s perspective by simply validating their expressions.
  2. Paraphrasing and Reflection: Therapists often repeat or rephrase what the client says, subtly mirroring their thoughts and emotions. This technique reinforces the client’s sense that their feelings are understood, even if the therapist may not fully grasp the underlying complexities.
  3. Empathetic Neutrality: The therapist maintains a compassionate yet neutral stance, which allows clients to project their own understanding onto the interaction. This lack of overt judgment or analysis makes the exchange feel more reciprocal.
  4. Contextual Flexibility: The therapist doesn’t need specific knowledge about a topic discussed by the client. Instead, they focus on eliciting the client’s personal feelings or associations, sidestepping areas where gaps in understanding might disrupt the illusion.

These elements align closely with ELIZA’s design, where simple pattern matching and reflective prompts simulate the Rogerian technique, creating the impression of mutual understanding without genuine comprehension. The approach also has a cynical appeal for practitioners: a patient who never arrives at solutions on their own can become a source of continuing appointments (and income), so paradoxically the most financially successful therapists would be those who cure the fewest patients while maintaining the largest caseload.

However, the historical interpretation of ELIZA’s purpose is nuanced. While Weizenbaum later described it as a caricature of psychoanalytic dialogue, the title of his 1966 paper—“ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine”—suggests a broader aim: to serve as a platform for exploring human-machine interaction. This duality underscores the complexity of ELIZA’s role in the history of artificial intelligence and its reception by the public.

The Engineering of Intelligence

The origins of computational intelligence owe much to the pioneering work of Alan Turing and, to a lesser extent, Ada Lovelace. Turing is celebrated for his Universal Turing Machine, a theoretical construct devised to address the Entscheidungsproblem posed by Hilbert and Ackermann. In parallel with the results of Gödel and Church, Turing showed that some questions are undecidable: no program running on his machine can determine, for every possible program, whether that program will eventually halt, a question we now call “the halting problem.”

In contrast to Turing’s focus on universal computation, Ada Lovelace foresaw the potential of machines to manipulate symbolic data, such as music, foreshadowing symbolic computing—a cornerstone of AI and a key concept in ELIZA’s development. While Turing’s work laid the theoretical and practical foundations, Lovelace’s insights highlighted the broader applicability of computing beyond numbers.

Though Turing’s Imitation Game (now the Turing Test) remains a touchstone in AI discourse, it holds limited relevance to ELIZA, which was not intended to simulate intelligence but to explore human-machine communication through symbolic interaction.

Joseph Weizenbaum entered this legacy with the creation of ELIZA at MIT in the 1960s. Unlike popular interpretations, ELIZA was not designed to simulate intelligence or serve as a chatbot but to study human-machine interaction. Using the Rogerian framework, ELIZA’s algorithm leveraged the interpretive tendencies of human users. This allowed the program to maintain the illusion of understanding while relying on simple computational mechanisms. ELIZA was implemented in MAD using SLIP, a list-processing package Weizenbaum had written for symbol manipulation, reflecting his background as a software engineer.

Weizenbaum’s primary interest was not in artificial intelligence itself but in how humans interpret machine behavior. ELIZA demonstrated how easily people ascribe intelligence to machines through interaction, a phenomenon that deeply concerned Weizenbaum. He feared that such illusions could mislead users into trusting machines undeservedly, a prescient concern given the subsequent trajectory of AI development.

The legacy of ELIZA took an unexpected turn when a Lisp-based version created by Bernie Cosell spread rapidly through academic and public networks, overshadowing Weizenbaum’s original implementation. This dissemination, coupled with public fascination, resulted in ELIZA becoming a cultural symbol of early AI.
However, the ports to many different languages revealed that ELIZA’s algorithm was essentially a long list of if-else-style keyword rules that assembled responses from the user’s own words, without any wider context. It did not engage in anything akin to reasoning or planning.
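To make that mechanism concrete, here is a minimal Python sketch of the keyword-and-reflection style of rule described above. The keywords, templates, and pronoun swaps are invented for illustration; this is not Weizenbaum’s DOCTOR script, only the general if-this-keyword-then-this-template pattern that the ports exposed.

```python
import random
import re

# Pronoun swaps so the user's words can be echoed back ("my" -> "your").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Illustrative keyword rules: each pattern maps to response templates.
# "{0}" is filled with the (reflected) text captured after the keyword.
RULES = [
    (r"\bi need (.*)",  ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bi am (.*)",    ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"\bmy (.*)",      ["Tell me more about your {0}.", "Why do you say your {0}?"]),
    (r"\bbecause (.*)", ["Is that the real reason?"]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def respond(user_input: str) -> str:
    """Apply the first matching keyword rule, else fall back to a stock phrase."""
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    # e.g. "How long have you been unhappy about your job?"
    print(respond("I am unhappy about my job"))
```

Everything the “therapist” says is assembled from the user’s own words; there is no memory, no model of the conversation, and nothing resembling understanding behind it.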

The pre-ELIZA years

Before developing ELIZA, Joseph Weizenbaum wrote his first paper in 1962, published in Datamation under the intriguing title “How to Make a Computer Appear Intelligent.” This brief article described a simple algorithm for playing gomoku, a five-in-a-row game played on a Go board. Although Weizenbaum did not code the program himself, he was fascinated by its ability to create the “illusion of spontaneity,” capturing his interest in the user experience rather than the algorithm’s mechanics.

The paper opens with a sharp critique of the field of artificial intelligence, dismissing efforts to define intelligence as sterile and unproductive. Weizenbaum references Marvin Minsky’s suggestion that “intelligent” behavior is often merely an observer’s interpretation of seemingly opaque mechanisms. Once the inner workings of a program become clear, the illusion of intelligence dissolves, leading observers to reclassify the behavior as mechanical or algorithmic. This is the essence of what Robert Sapolsky argues about humans lacking free will: once the mechanism is visible, the appearance of agency dissolves.

Weizenbaum’s critique underscores his skepticism of AI and his interest in the ways users interpret machine behavior. While he appears unimpressed by AI as a discipline, he is captivated by the psychological process through which humans attribute intelligence to simple algorithms. This early work reflects the tension in Weizenbaum’s thinking, which would later culminate in ELIZA. His focus remained less on AI’s technical possibilities and more on the interpretive reactions of users engaging with machines.

#

Interpretation, the process by which cognitive agents assign meaning to experiences, is foundational to understanding and reasoning. This ability, akin to semantics or mental models, allows agents to abstract from specific experiences and draw inferences. Even artificial neural networks, while lacking explicit symbolic structures, engage in a form of interpretation through their diffuse representations.

Weizenbaum, influenced by Kenneth Colby’s work on paranoia, explicitly acknowledged that ELIZA lacked interpretive machinery. Unlike systems that assign meaning to inputs, ELIZA relied entirely on keywords drawn from the user’s own input to fill slots in its responses. The Rogerian framework was deliberately chosen because it allowed ELIZA to respond vaguely and remain ignorant of the subject matter, leaving the user to construct meaning and placing the conversational burden on the human interlocutor. Weizenbaum viewed this as a limitation, suggesting that future versions of ELIZA could benefit from the ability to build belief structures and detect user rationalizations and contradictions.

Ultimately, Weizenbaum’s work focused not on ELIZA’s interpretations—which were nonexistent—but on the interpretations made by humans interacting with it. This highlights a central theme of his research: the human tendency to project meaning onto systems that themselves lack understanding, such as genes in DNA.

Weizenbaum saw ELIZA as a tool to explore how humans interpret machine behavior rather than as an intelligent agent itself. He believed that improving ELIZA would require adding interpretive structures to detect rationalizations or contradictions in user input. However, his focus remained on the human side of interpretation—how users ascribed meaning to ELIZA’s responses and projected intelligence onto a program that, by design, had none.

#

Interpretation, the process by which cognitive agents assign meaning to experiences, applies in ELIZA’s case solely to its textual interactions with users. However, ELIZA lacks interpretive structures or processes, relying entirely on users to project meaning onto its responses.

At its simplest, a sentence can be viewed as a list of words. However, as Chomsky demonstrated, human language is inherently recursive, capable of producing complex, deeply nested structures. These recursive properties make sentences more than mere linear lists; they become trees, a specific kind of graph, with elements interconnected by relationships. Even at the surface, language can be represented as a graph structure, as pronouns and other references create explicit links within discourse. This graph-like structure is crucial to many algorithms, which rely on recursive processes to traverse these connections efficiently.
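A small sketch may make this concrete. Assuming a hand-built constituency tree for a single sentence (the labels and bracketing below are illustrative, not the output of any real parser), recursion is the natural way to walk the nested structure just described.

```python
# A tiny constituency tree for "the dog chased the cat that ran",
# written as nested tuples: (label, children...) with bare strings as leaf words.
tree = ("S",
        ("NP", ("Det", "the"), ("N", "dog")),
        ("VP", ("V", "chased"),
               ("NP", ("Det", "the"), ("N", "cat"),
                      ("RelClause", ("Pron", "that"), ("VP", ("V", "ran"))))))


def leaves(node):
    """Recursively collect the words at the leaves, left to right."""
    if isinstance(node, str):            # a leaf word
        return [node]
    _label, *children = node
    words = []
    for child in children:
        words.extend(leaves(child))
    return words


def depth(node):
    """Recursively measure how deeply the phrase structure is nested."""
    if isinstance(node, str):
        return 0
    _label, *children = node
    return 1 + max(depth(child) for child in children)


print(" ".join(leaves(tree)))   # the dog chased the cat that ran
print(depth(tree))              # nesting depth of this particular tree
```

The same recursive pattern extends to discourse: following a pronoun back to its referent is just traversing another edge in the graph.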

Extended discourse mirrors these principles. While linear on the surface, conversations contain deeper connections—both semantic and pragmatic—that link different parts of the dialogue. These ideas highlight how interpretation, though absent in ELIZA, ties deeply into computational and linguistic structures, forming a bridge between symbolic reasoning and natural language processing.

The difference between ELIZA and AI

Interpretation, as a cognitive process, aligns with Turing’s concept of a computable recursive function. In artificial neural networks (ANNs), interpretation is diffused across the network rather than represented in concise structures. ANNs are nevertheless fundamentally graph-based, since a “network” is simply a graph, although their representations are closer in spirit to grid cells: ANNs operate through matrix multiplication over grid-like layers of weights rather than explicit graph traversal. The two views are closely related, and hybrid approaches combining ANNs with classical symbolic AI algorithms are emerging to better understand and guide these systems. All of this is vastly more complex than the long list of if-else statements that ELIZA essentially consisted of.
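The point about matrix multiplication versus graph traversal fits in a few lines of NumPy. The weights and input below are made up; the sketch only shows that summing contributions along the edges of a small fully connected “graph” and multiplying by its weight matrix are the same computation.

```python
import numpy as np

# A tiny "network": 3 input units fully connected to 2 output units.
# Read as a graph, weights[i, j] is the weight on the edge from input i to output j.
weights = np.array([[0.2, -1.0],
                    [0.5,  0.3],
                    [-0.7, 0.8]])
x = np.array([1.0, 0.0, 2.0])   # an input vector (illustrative values)

# Graph view: visit every edge and accumulate contributions at each output node.
graph_out = np.zeros(2)
for i in range(3):           # source node (input unit)
    for j in range(2):       # target node (output unit)
        graph_out[j] += x[i] * weights[i, j]

# Matrix view: the same computation as one matrix multiplication.
matrix_out = x @ weights

print(np.allclose(graph_out, matrix_out))   # True: the two views coincide
```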

ELIZA, though limited to shallow recursive transformations of input to output, highlighted the centrality of interpretation in AI interactions. Users naturally interpreted ELIZA as intelligent, drawn into what Weizenbaum termed the “illusion of understanding.” This illusion stems not from abnormal cognitive errors but from the normal, continuous cognitive process of interpretation that humans rely on in all aspects of cognition. Mistaken interpretations, while common, are typically easy to correct when they become relevant, demonstrating the fluidity and adaptability of this cognitive process.

ELIZA as a platform

Following his early work, Joseph Weizenbaum published a series of influential papers, culminating in his 1966 ELIZA paper titled “ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine.” This paper explored ELIZA not as an AI or chatbot but as a platform for studying human interaction with machines, particularly the process of interpretation.

Weizenbaum drew attention to ELIZA’s simplicity, describing how it created the “illusion of understanding” with minimal machinery. He acknowledged that this illusion was not unique to ELIZA; rather, it reflected a broader phenomenon in which people read intelligent behavior into systems that have no genuine understanding. The phenomenon parallels later critiques, such as Searle’s “Chinese Room” argument, which similarly highlighted that a system can manipulate symbols convincingly while comprehending nothing, and the limits of judging understanding from behavior alone.

Ultimately, ELIZA was not intended to advance AI but to serve as a platform for experiments in interpretation and human-machine communication. Weizenbaum envisioned studies exploring how users interacted with ELIZA, including its role in a version of Turing’s imitation game, even though ELIZA was not remotely close to passing the Turing Test. His concerns about misinterpretation foreshadowed broader debates on AI ethics and the challenges of distinguishing between true intelligence and the appearance of it, debates that remain critical today. While later critics mischaracterized ELIZA as a failed AI, Weizenbaum’s work positioned it as a study of human cognition and the dynamics of interpretation, an area that remains critically understudied.

Weizenbaum envisioned ELIZA as a tool for exploring how understanding and misunderstanding arise in communication. His unpublished outline detailed experiments on partial understanding, discourse correction, and potential misinterpretation in human-machine interaction, but those plans were never carried out, and ELIZA’s potential as a research platform was never fully realized.

Other researchers, notably at the Stanley Cobb Laboratory for Psychiatric Research, recognized ELIZA’s potential as a tool in psychology. They used a modified version of ELIZA called “YapYap” for studies on dyadic communication, finding that it stabilized one side of conversations, enabling systematic hypothesis testing. These studies inadvertently highlighted the interpretive challenges Weizenbaum had identified. Although these studies did not explicitly address Weizenbaum’s focus on human-machine interpretation, they revealed how such programs could model simplified communication for theoretical testing.

Harold Garfinkel’s team at Massachusetts General Hospital aligned more closely with Weizenbaum’s original vision, examining how human-computer interactions leveraged human social norms to create the illusion of authentic conversation using tactics more commonly associated with astrology. Unfortunately, Garfinkel’s research and similar projects using ELIZA or alternative platforms like LYRIC were never published, and broader research into interpretive phenomena faded over time. Subsequent educational projects with ELIZA focused more on practical applications than on its foundational role in understanding human-computer communication.

#

Joseph Weizenbaum created ELIZA not as an AI but as a platform to study human interpretation, particularly how people interact with programs that seem intelligent. While Weizenbaum provided the DOCTOR script as an example, he never intended ELIZA to be seen as truly intelligent. More ambitious programs existed around the same period or soon followed, such as Colby’s PARRY and programs capable of solving math word problems or answering natural language queries. Ironically, ELIZA became conflated with AI, undermining the very research Weizenbaum aimed to support and leading to widespread misinterpretation.

This misinterpretation began with Bernie Cosell, who created a Lisp version of ELIZA based on Weizenbaum’s paper shortly after its 1966 publication. As Lisp gained prominence as the preferred language for AI, Cosell’s version of ELIZA overshadowed Weizenbaum’s original MAD-SLIP version, which quickly faded into obscurity.

The association of ELIZA with Lisp persisted for decades, cementing a widespread misunderstanding about its origins. Despite this, the concept and name remained correctly attributed to Weizenbaum, but his intended purpose of exploring human-machine interaction was largely overshadowed by the program’s reputation as an early example of AI. This gap between purpose and perception reflects the complex history of ELIZA’s legacy.

#

In 1977, Creative Computing published a BASIC port of ELIZA, coinciding with the release of the Commodore PET, Apple II, and TRS-80—key milestones in the personal computer revolution. This timing, combined with BASIC’s popularity as the primary programming language for hobbyists, led to a proliferation of ELIZA programs. The simplicity of the BASIC version allowed millions of computer enthusiasts to experiment with it, inspiring countless variations across nearly every programming language. As a result, this BASIC ELIZA became the most widely copied and adapted program in computing history.

As BASIC ELIZA spread among the public, it became the most recognized version, while Cosell’s Lisp-based ELIZA dominated in academic circles. Both diverged from the original MAD-SLIP version created by Weizenbaum, which was largely forgotten until its rediscovery in 2021. The popularity of BASIC ELIZA and its knockoffs ensured ELIZA’s enduring influence on both academia and the general public: enthusiasts continue to uncover and adapt versions of the program decades later, reflecting a widespread and lasting interest in AI-like interactions.

#

Joseph Weizenbaum’s vision for ELIZA as a platform to study human interpretive processes was overshadowed by the program’s unintended success. The DOCTOR script, designed to demonstrate ELIZA’s functionality, became its defining feature, celebrated for its simplicity rather than used to investigate interpretation as Weizenbaum intended.

The simplicity and effectiveness of the DOCTOR script, which was designed only to demonstrate ELIZA’s functionality, led ELIZA to be celebrated as a standalone achievement rather than used as a tool for exploring interpretive processes. This misinterpretation was precisely what Weizenbaum feared, as it shifted focus away from the critical issue of how humans attribute intelligence, credibility, and agency to machine outputs.

Weizenbaum built ELIZA not to pass the Turing Test but to explore how humans interpret and attribute intelligence to machine outputs. Weizenbaum’s concerns echoed Alan Turing’s earlier reflections on the evolving perception of machine intelligence. While Turing speculated that societal attitudes would eventually accept the notion of “thinking machines,” Weizenbaum worried about the implications of this acceptance, particularly how easily people could be misled by the illusion of understanding. He emphasized the need for rigorous investigation into how humans judge the credibility of machine outputs, a concern that has only grown more urgent as internet bots and large language models increasingly influence decision-making.

Weizenbaum feared that the example of ELIZA would overshadow its purpose, a prescient warning as the program became an early symbol of AI itself. Had his research into human-machine interaction progressed, it might have illuminated critical questions about how credibly and intelligently humans assess machine outputs, questions that have only grown more urgent in the modern era. Today, his insights remain strikingly relevant as society grapples with the ethical and cognitive challenges posed by increasingly sophisticated AI systems.

#

ELIZA, despite its historical prominence and its popularity as shorthand for AI among the non-technical, was a remarkably simple algorithm, nowhere near passing the Turing Test. Instances where humans were fooled reflected a lack of understanding, or below-average intelligence, on the part of the user. Its responses, drawn from a small set of scripted phrases, were brittle and superficial, yet they demonstrated how easily some humans could be led to project intelligence onto a machine.

For software to be considered “AI” it must utilize artificial neural networks specifically, not simply a large script of if-else statements, which is the simplest form of algorithm. Non-neural programs are typically deterministic, while artificial neural networks are probabilistic. Searle’s Chinese Room thought experiment is a relevant illustration here: it captures why early rule-based language translation programs, which shuffled symbols without any model of meaning, achieved such low accuracy.

The most significant change to Google Translate came in November 2016, when it was rewritten to incorporate neural networks, more than doubling its accuracy. That version of Google Translate relied on an LSTM-based architecture. Modern LLMs, built on transformer architectures, extend this evolution, achieving levels of fluency and adaptability unimaginable in ELIZA’s time. This difference of several orders of magnitude in complexity is why you don’t get coherent speech by repeatedly accepting the next word suggested by spellcheck, while modern LLMs are strikingly articulate.
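The spellcheck comparison is easy to reproduce. The toy Python below builds bigram counts from a miniature invented corpus and always picks the single most frequent next word, which is essentially what a keyboard suggestion does; the output collapses into a short repeating loop rather than coherent speech.

```python
from collections import Counter, defaultdict

# A miniature corpus, invented for the example.
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog and the dog saw the cat").split()

# Count bigrams: for each word, how often does each other word follow it?
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

# Spellcheck-style generation: always take the single most frequent next word.
word, output = "the", ["the"]
for _ in range(12):
    if word not in next_counts:
        break
    word = next_counts[word].most_common(1)[0][0]
    output.append(word)

# Falls into a repeating loop, e.g. "the cat sat on the cat sat on ..."
print(" ".join(output))
```

An LLM instead samples from a probability distribution computed by a trained network over a huge vocabulary and context window, which is where the orders-of-magnitude difference in complexity shows up.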

ELIZA’s legacy, then, is not in its technical achievements but in its cautionary lessons. It underscores the importance of understanding the tools we build and the interpretive processes through which we engage with them. One of its enduring contributions is in clarifying the distinction between traditional algorithms and true AI implementations.

For instance, recommendation and ranking algorithms—used extensively by platforms like Amazon, Google Search, and social media—rely on deterministic processes. These algorithms evaluate predefined metrics, such as user behavior, keyword relevance, or click-through rates, to optimize outputs. While they appear sophisticated, they fundamentally operate as rigid systems, designed to prioritize specific outcomes like product visibility or user engagement. These algorithms are powerful tools but lack any semblance of understanding or adaptability beyond their programmed parameters.
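A minimal sketch of such a ranking step, with invented items and an invented scoring formula, shows the deterministic character described above: the same inputs always produce the same ordering, and nothing is learned along the way.

```python
# Hypothetical catalogue items with precomputed metrics (all values invented).
items = [
    {"title": "wireless mouse",      "relevance": 0.72, "click_rate": 0.10, "margin": 0.30},
    {"title": "mechanical keyboard", "relevance": 0.68, "click_rate": 0.22, "margin": 0.45},
    {"title": "usb hub",             "relevance": 0.80, "click_rate": 0.05, "margin": 0.15},
]

def score(item):
    """A fixed, hand-chosen weighting of predefined metrics: no learning, no context."""
    return 0.6 * item["relevance"] + 0.3 * item["click_rate"] + 0.1 * item["margin"]

# Deterministic: rerunning this always yields exactly the same ranking.
for item in sorted(items, key=score, reverse=True):
    print(f"{score(item):.3f}  {item['title']}")
```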

By contrast, true AI implementations, such as large language models (LLMs), utilize artificial neural networks to probabilistically analyze and generate outputs. Unlike recommendation systems, which operate within static, predefined rules, LLMs process language as interconnected relationships learned from vast datasets, enabling them to produce nuanced and context-aware responses. For example, an LLM like GPT can write a coherent essay on a given topic, while a ranking algorithm simply sorts search results by a relevance score along a single dimension; even chess-playing programs, though more complex than ranking, lack the adaptability and probabilistic modeling inherent in neural networks. The fundamental difference is that an LLM must be trained, learning patterns and generating novel combinations of information, whereas traditional algorithms that do not use neural networks must be programmed, operating on static, hand-written rules.
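The “trained rather than programmed” distinction can also be illustrated compactly. In the hypothetical spam-by-message-length example below, the rule-based classifier’s threshold is written by hand, while the logistic-regression parameters are fitted from invented example data by gradient descent; neither the data nor the threshold comes from any real system.

```python
import numpy as np

# Invented training data: feature = message length in words, label = 1 for "spam".
lengths = np.array([5., 8., 40., 52., 61., 7., 45., 9.])
labels  = np.array([0., 0., 1.,  1.,  1.,  0., 1.,  0.])

# Programmed: a rule someone wrote by hand; it never changes.
def rule_based(length):
    return 1 if length > 30 else 0

# Trained: a one-feature logistic regression fitted by gradient descent.
# The parameters w and b are not chosen by a programmer; they come from the data.
x = lengths / 10.0            # scale the feature so plain gradient descent behaves well
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))     # predicted probability of "spam"
    w -= 0.1 * np.mean((p - labels) * x)       # gradient step on the log loss
    b -= 0.1 * np.mean(p - labels)

def trained(length):
    return 1 if 1.0 / (1.0 + np.exp(-(w * length / 10.0 + b))) > 0.5 else 0

print([rule_based(v) for v in lengths])   # behaviour fixed in advance
print([trained(v) for v in lengths])      # behaviour learned from the examples
```

Scaled up by many orders of magnitude, the second approach is what training a large language model amounts to.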

Recognizing these differences is crucial, especially in identifying software falsely marketed as AI. Many companies advertise deterministic algorithms as AI, misleading consumers with inflated claims. For those seeking clarity, n0c offers testing services to differentiate genuine AI implementations from misrepresented systems. Please use the contact form here for inquiries.

As we advance, ELIZA reminds us of the importance of critically evaluating the tools we build, understanding their capabilities and limitations, and resisting the allure of oversimplification. This includes countering both the cultural misinformation spread by non-technical users and the deliberate disinformation employed by advertisers. Vigilance is especially critical as some companies exploit the term “AI” to avoid accountability for harmful algorithms designed for “engagement.” These systems perpetuate harmful behaviors and shift blame onto AI as a scapegoat. Regulating AI alone will not resolve these issues; many such systems would not qualify as AI under legal scrutiny, a loophole deliberately exploited by AI Snake Oil sellers. The stark difference in impact between the Digital Services Act, which effectively targeted social media, and the EU AI Act, which had little effect, underscores the need for precise and informed regulation.

Dec 3, 2024
Recent developments in Artificial Intelligence have generated a lot of public anxiety, especially regarding ethical issues: data bias, privacy, the opacity of automated decisions, the effects of algorithmic bubbles on democratic debate, not to mention the harms caused by deep fakes – the list goes on. In response, the emerging field of AI ethics aspires to address these issues. The expert panel of this year’s Royal Institute of Philosophy Annual Symposium discuss these issues, and think of ways they might be addressed.

The Panel:
Mark Coeckelbergh, Professor of Philosophy of Media and Technology in the Department of Philosophy at the University of Vienna.

Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy.

Linda Eggert, Early Career Research Fellow in the Faculty of Philosophy, at Balliol College, and the Institute for Ethics in AI at the University of Oxford.

Allan Dafoe is a Principal Scientist at Google DeepMind leading work on Frontier Safety and Governance.

Ritula Shah (chair) is an award-winning journalist and broadcaster. She is the presenter of ‘Calm Classics’ every weekday evening on ClassicFM. Ritula left the BBC in April 2023, after a career spanning almost 35 years.

There is broad consensus across the tech industry, governments and society, that as artificial intelligence becomes more embedded in every aspect of our world, regulation will be essential. But what does this look like? Can it be adopted without stifling innovation? Are current frameworks presented by government leaders headed in the right direction?

Join host Hannah Fry as she discusses these questions and more with Nicklas Lundblad, Director of Public Policy at Google DeepMind. Nicklas emphasises the importance of a nuanced approach, focusing on adaptability and evidence-based policymaking. He highlights the complexities of assessing risk and reward in emerging technologies, advocating for a focus on harm reduction rather than outright bans.

Timecodes:
00:00 Introduction
02:15 Current mood around AI
06:12 Where we are right now
08:05 Regulation needs
14:15 Planning for unknowns
19:40 Uncertainty and progress
24:17 Public and private sector
26:00 Self regulation
28:15 Investments in science
31:18 Risk and reward
32:20 Global approach
33:45 Regulation accelerating innovation
38:02 Unacceptable risk
40:00 Frontier models
47:50 Emerging capabilities
51:42 Technology diffusion
54:06 Hannah’s thoughts

Leaders can’t be afraid to disrupt the status quo, says pharmaceutical CEO Paul Hudson. In conversation with TED’s Lindsay Levin, he shares how AI eliminates “unglamorous work” and speeds up operations while collaborations across competitors can dramatically boost sustainability. Hear some powerful advice for the modern leader — and learn why it’s time for businesses to embrace AI.

Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor, who are recognized by Time magazine as two of the 100 most influential experts in AI, cut through the confusion to give you an essential understanding of how AI works and why it often doesn’t, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don’t work, and probably never will.

On November 8, Arvind Narayanan and Sayash Kapoor met online with the public, joined by Thomas Luo, Founding Partner of GenAI Assembling, for an in-depth discussion on AI’s real value and potential risks in today’s society, focusing particularly on how to recognize and avoid being misled by false advertising in an increasingly complex information landscape.

Oct 30, 2024
The Truth of the Matter: In this special episode, Andrew and Gregory C. Allen discuss the National Security Memorandum on AI that the Biden administration released on October 24th, its primary audience and main objectives, and what the upcoming U.S. election could mean for its implementation.

Sep 24, 2024
4:40 to 5:55
Artificial Intelligence: how can we harness Europe’s innovative potential?

The EU’s AI Act came into force on 1 August 2024. Though lagging the United States and China in AI development, the EU is striving to establish itself as a global rule-setter in digital technologies. With concerns running high for some about AI-powered techniques such as profiling, targeting, election manipulation, and disinformation campaigns, eurocrats have celebrated the EU AI Act as the globe’s first comprehensive legal framework around AI. Yet the ability of European businesses to catch up with the pace of AI development and implementation remains very much in doubt.

Can European businesses overcome the challenges posed by this stringent regulatory landscape to remain competitive on the global stage? Mark Zuckerberg and Daniel Ek, CEOs of Meta and Spotify, recently issued a high-profile statement warning that ‘pre-emptive regulation of theoretical harms for nascent technologies such as open-source AI will stifle innovation’ and ‘Europe’s risk-averse, complex regulation could prevent it from capitalising on the big bets that can translate into big rewards’. Others welcome the EU’s attempt to build public confidence in AI and to shape its future direction to maximise public goods and minimise harmful and unforeseen consequences.

In addition to questions over the EU’s risk-averse approach to regulation, industry figures also warn of other drawbacks, including investment shortfalls and scalability issues, that could hinder innovation. As one Silicon Valley entrepreneur has commented, innovation in Europe is held back by an absence of ‘genuine risk-seeking capital at all stages’, pointing to a wider problem of risk aversion. Join us to contribute to the discussion about Europe’s appetite for investment in AI and whether the EU AI Act is stifling innovation or helping to shape the development and use of AI in a way that maximises its potential to do good.

Keith Teare, Silicon Valley based entrepreneur and CEO of SignalRank Corporation

It is part of the event Unshackling Europe’s Economy: what holds us back?

Nov 5, 2020
In this webinar, experts from the National Institute of Standards and Technology provide an overview of their key artificial intelligence initiatives including responsible, trustworthy, and explainable AI.

Mar 21, 2024
Joint stakeout Linda Thomas-Greenfield, Representative of the United States to the United Nations, joined by representatives from the Bahamas, Japan, The Netherlands, Morocco, Singapore and the United Kingdom on the UN General Assembly resolution on Artificial Intelligence.

US Ambassador Linda Thomas-Greenfield, said today (21 Mar), “All 193 members of the United Nations General Assembly have spoken in one voice and together chosen to govern artificial intelligence, rather than let it govern us.”

The General Assembly adopted this morning the draft resolution “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.”

The text was introduced by the United States of America and co-sponsored by over 120 Member States.

Addressing the press after the adoption of the resolution, US Ambassador Linda Thomas-Greenfield, said, “The groundbreaking resolution adopted today cements global consensus for safe, secure, trustworthy AI systems, systems that both advance sustainable development and respect fundamental freedoms.”

She also said, “A first of its kind resolution adopted not only by majority, but by consensus, because ultimately, the risk and benefits of AI have the potential to impact all of us. And so, approaching it requires all of us. The resolution we just adopted reflected our shared responsibility and collective fate. It leaves no one behind; it lays the groundwork for AI systems that leave no one behind either.”

She continued, “Why the UN? Why should this body, with so many existential challenges on its plate, take on AI? And the answer is simple. AI is existential. And not only in its threats, of which there are many, but in its opportunities, of which I believe there are many more.”

Answering a question, she said “AI will be able to bring to people in remote villages access to technology, but access to the basic needs that they didn’t have access to before. And the whole idea of this is to bridge that gap.”

Moroccan Ambassador Omar Hilale said, “This resolution is a steppingstone for existing and future multilateral initiatives on digital cooperation and on AI in particular. This resolution is not an end in itself, but the beginning of our collective journey to shape safe, secure, and trustworthy artificial intelligence systems for sustainable development that leave no one behind.”

Addressing the General Assembly earlier this morning, the US Ambassador said, “All told, for all of the unknowns surrounding AI, this much is clear: this technology has the potential to unlock more progress, on more issues, for more people. But in order to ensure equitable access to these benefits, and mitigate risks for communities across the globe, we must approach this technology as a global community. Leaving no one behind.”

At Google, we take responsible AI seriously. Discover why responsible AI matters more than ever, and why we care so much about it. Explore Google’s 7 AI Principles, a framework designed to guide users toward a safer and more ethical use of AI.