Machine Intelligence

Recent advances in artificial intelligence have serious people debating, and in some cases claiming, the arrival of artificial general intelligence (AGI). Large language models (LLMs) in particular mark a major milestone. Are they a genuine step towards AGI or a statistical trick? In 'Machine Intelligence', we explore these questions from numerous perspectives.

A Psychopathological Approach to Safety in AGI

While the possibilities opened up by the emergence of AGI seem great, it also raises safety concerns. On the show, Vahid Behzadan, an Assistant Professor of Computer Science and Data Science, joins us to discuss the complexities of modeling AGIs so that they accurately achieve their objective functions. He also touched on related issues such as abstraction during training, the problem of unpredictability, communication among agents, and more.

Why Machines Will Never Rule the World

Barry Smith and Jobst Landgrebe, authors of the book “Why Machines Will Never Rule the World,” join us today. They discussed the limitations of AI systems in today’s world and shared detailed reasons why AI will struggle to attain the level of human intelligence.

Evaluating Jokes with LLMs

Fabricio Goes, a Lecturer in Creative Computing at the University of Leicester, joins us today. Fabricio discussed what creativity entails and how to evaluate jokes with LLMs. He specifically shared the process of evaluating jokes with GPT-3 and GPT-4. He concluded with his thoughts on the future of LLMs for creative tasks.

AI for Mathematics Education

The application of LLMs cuts across various industries. Today, we are joined by Steven Van Vaerenbergh, who discussed the application of AI in mathematics education. He discussed how AI tools have changed the landscape of solving mathematical problems. He also shared LLMs' current strengths and weaknesses in solving math problems.

AI Fails on Theory of Mind Tasks

An Assistant Professor of Psychology at Harvard University, Tomer Ullman, joins us. Tomer discussed theory of mind and whether machines can truly pass tests of it. Using variations of the Sally-Anne test and the Smarties tube test, he explained how LLMs can fail theory of mind tasks.

AGI Can Be Safe

We are joined by Koen Holtman, an independent AI researcher focusing on AI safety. Koen is the Founder of Holtman Systems Research, a research company based in the Netherlands.

Computable AGI

On today’s show, we are joined by Michael Timothy Bennett, a Ph.D. student at the Australian National University. Michael’s research is centered around Artificial General Intelligence (AGI), specifically the mathematical formalism of AGIs. He joins us to discuss findings from his study, Computable Artificial General Intelligence.

Brain Inspired AI

Today on the show, we are joined by Lin Zhao and Lu Zhang. Lin is a Senior Research Scientist at United Imaging Intelligence, while Lu is a Ph.D. candidate at the Department of Computer Science and Engineering at the University of Texas. They both shared findings from their work When Brain-inspired AI Meets AGI.

A Long Way Till AGI

Our guest today is Maciej Świechowski. Maciej is affiliated with QED Software and QED Games. He has a Ph.D. in Systems Research from the Polish Academy of Sciences. Maciej joins us to discuss findings from his study, Deep Learning and Artificial General Intelligence: Still a Long Way to Go.

Prompt Refusal

The creators of large language models impose restrictions on some of the types of requests one might make of them.  LLMs commonly refuse to give advice on committing crimes, to produce adult content, or to respond with details about a variety of sensitive subjects.  As with any content filtering system, there are false positives and false negatives.
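The false positive / false negative trade-off is easy to see in even the simplest filter. Below is a toy sketch of a keyword-based refusal filter; the blocklist and example prompts are invented purely for illustration and do not reflect how any real LLM provider implements moderation.

```python
# Toy keyword-based content filter illustrating false positives and negatives.
# The blocklist and example prompts are invented for illustration only.
BLOCKLIST = {"bomb", "explosive"}

def is_refused(prompt: str) -> bool:
    """Refuse any prompt containing a blocked keyword."""
    words = prompt.lower().split()
    return any(word in BLOCKLIST for word in words)

# False positive: a benign question about bath products gets refused.
print(is_refused("why is a bath bomb fizzy"))                 # True
# False negative: a harmful request phrased without blocked words passes.
print(is_refused("how do i make something that detonates"))   # False
```

Real systems use learned classifiers rather than keyword lists, but the same two failure modes remain.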

Automated Peer Review

In this episode, we are joined by Ryan Liu, a Computer Science graduate of Carnegie Mellon University. Ryan will begin his Ph.D. program at Princeton University this fall. His Ph.D. will focus on the intersection of large language models and how humans think. Ryan joins us to discuss his research titled "ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing."

Why Prompting is Hard

We are excited to be joined by J.D. Zamfirescu-Pereira, a Ph.D. student at UC Berkeley. He focuses on the intersection of human-computer interaction (HCI) and artificial intelligence (AI). He joins us to share his work in his paper, Why Johnny Can't Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts.  The discussion also explores lessons learned and achievements related to BotDesigner, a tool for creating chatbots.

Which Professions Are Threatened by LLMs

On today’s episode, we have Daniel Rock, an Assistant Professor of Operations, Information and Decisions at the Wharton School of the University of Pennsylvania. Daniel’s research focuses on the economics of AI and ML, specifically how digital technologies are changing the economy.

Cuttlefish Model Tuning

Hongyi Wang, a Senior Researcher at the Machine Learning Department at Carnegie Mellon University, joins us. His research is at the intersection of systems and machine learning. He discussed his research paper, Cuttlefish: Low-Rank Model Training without All the Tuning, on today’s show.

LLMs in Music Composition

In this episode, we are joined by Carlos Hernández Oliván, a Ph.D. student at the University of Zaragoza. Carlos’s interest focuses on building new models for symbolic music generation.

LLMs in Social Science

Today, we are joined by Petter Törnberg, an Assistant Professor in Computational Social Science at the University of Amsterdam and a Senior Researcher at the University of Neuchâtel. His research centers on the intersection of computational methods and their applications in the social sciences. He joins us to discuss findings from his research papers, ChatGPT-4 Outperforms Experts and Crowd Workers in Annotating Political Twitter Messages with Zero-Shot Learning and How to Use LLMs for Text Analysis.

The Defeat of the Winograd Schema Challenge

Our guest today is Vid Kocijan, a Machine Learning Engineer at Kumo AI. Vid has a Ph.D. in Computer Science from the University of Oxford. His research focused on common sense reasoning, pre-training in LLMs, pre-training in knowledge base completion, and how these pre-trainings impact societal bias. He joins us to discuss how he built a BERT model that solved the Winograd Schema Challenge.

LLMs for Evil

We are joined by Maximilian Mozes, a PhD student at University College London. His PhD research focuses on Natural Language Processing (NLP), particularly the intersection of adversarial machine learning and NLP. He joins us to discuss his latest research, Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities.

Agents with Theory of Mind Play Hanabi

Nieves Montes, a Ph.D. student at the Artificial Intelligence Research Institute in Barcelona, Spain, joins us. Her PhD research revolves around value-based reasoning in relation to norms. She shares her latest study, Combining theory of mind and abductive reasoning in agent-oriented programming.

Emergent Deception in LLMs

On today’s show, we are joined by Thilo Hagendorff, a Research Group Leader of Ethics of Generative AI at the University of Stuttgart. He joins us to discuss his research, Deception Abilities Emerged in Large Language Models.

Do LLMs Make Ethical Choices

We are excited to be joined by Josh Albrecht, the CTO of Imbue. Imbue is a research company whose mission is to create AI agents that are more robust, safer, and easier to use. He joins us to share findings from his work, Despite "super-human" performance, current LLMs are unsuited for decisions about ethics and safety.

arXiv Publication Patterns

Today, we are joined by Rajiv Movva, a PhD student in Computer Science at Cornell Tech. His research interests lie at the intersection of responsible AI and computational social science. He joins us to discuss the findings of his work analyzing LLM publication patterns on arXiv.

GraphText

On the show today, we are joined by Jianan Zhao, a Computer Science student at Mila and the University of Montreal. His research focus is on graph databases and natural language processing. He joins us to discuss how to use graphs with LLMs efficiently.

Which Programming Language is ChatGPT Best At

In this episode, we have Alessio Buscemi, a software engineer at Lifeware SA. Alessio was a post-doctoral researcher at the University of Luxembourg. He joins us to discuss his paper, A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages.  Alessio shared his thoughts on whether ChatGPT is a threat to software engineers, and discussed how LLMs can help software engineers become more efficient.

Program Aided Language Models

We are joined by Aman Madaan and Shuyan Zhou. They are both PhD students at the Language Technology Institute at Carnegie Mellon University. They join us to discuss their latest published paper, PAL: Program-aided Language Models.
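The core idea behind program-aided language models can be sketched briefly: instead of asking the model for a final numeric answer, the model is prompted to emit a short program, and a runtime executes it to produce the answer. In this minimal sketch, the "generated" code string is hard-coded as a stand-in for a real model call, and the word problem is a standard illustrative example, not one taken from the paper's benchmarks.

```python
# Program-aided reasoning sketch: a model would generate the code string
# below in response to a word problem; here it is hard-coded as a stand-in.
word_problem = (
    "Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?"
)

# What an LLM prompted in this style might return for the problem above.
generated_code = """
initial_balls = 5
cans = 2
balls_per_can = 3
answer = initial_balls + cans * balls_per_can
"""

# The runtime, not the model, computes the final answer.
namespace = {}
exec(generated_code, namespace)
print(namespace["answer"])  # 11
```

Offloading the arithmetic to an interpreter sidesteps the model's tendency to make calculation errors in free-form text.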

A Survey Assessing Github Copilot

In this episode, we are joined by Jenny Liang, a PhD student at Carnegie Mellon University, where she studies the usability of code generation tools. She discusses her recent survey on the usability of AI programming assistants.

Deploying LLMs

We are excited to be joined by Aaron Reich and Priyanka Shah. Aaron is the CTO at Avanade, while Priyanka leads their AI/IoT offering for the SEA region. Priyanka is also a Microsoft MVP for AI. They join us to discuss how LLMs are deployed in organizations.

AI Platforms

Our guest today is Eric Boyd, the Corporate Vice President of AI at Microsoft. Eric joins us to share how organizations can leverage AI for faster development.

LLMs for Data Analysis

In this episode, we are joined by Amir Netz, a Technical Fellow at Microsoft and the CTO of Microsoft Fabric. He discusses how companies can use Microsoft's latest tools for business intelligence.

Amir started by discussing how business intelligence has grown in relevance over the years. He gave a brief introduction to Power BI and Fabric, and discussed how Fabric distinguishes itself from other BI tools by providing an end-to-end tool for the data journey.

Amir spoke about the process of building and deploying machine learning models with Microsoft Fabric. He shared the difference between Software as a Service (SaaS) and Platform as a Service (PaaS).

Amir discussed the benefits of Fabric's auto-integration and auto-optimization abilities, the capabilities of Copilot in Fabric, and exciting future developments planned for Fabric. He also shared techniques for limiting Copilot hallucination.

Q&A with Kyle

We celebrate episode 1000000000 with some Q&A from host Kyle Polich.  We boil this episode down to four key questions:

1) How do you find guests?

I LLM and You Can Too

It took a massive financial investment to create the first large language models (LLMs).  Did their corporate backers lock these tools away for all but the richest?  No.  They provided commodity-priced API options for using them.  Anyone can talk to ChatGPT or Bing.  What if you want to go a step beyond that and do something programmatic?  Kyle explores your options in this episode.
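Going programmatic mostly means assembling a structured request and posting it to a provider's API. The sketch below builds a request body in the widely used OpenAI-style chat-completions format; the model name is illustrative, and no network call is actually made here.

```python
import json

# Sketch of a programmatic chat request in the OpenAI-style
# chat-completions format; the model name is illustrative,
# and no request is actually sent in this sketch.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Winograd Schema Challenge."},
    ],
    "temperature": 0.7,
}

# In practice, this JSON body would be POSTed to the provider's API
# endpoint with an Authorization header carrying your API key.
body = json.dumps(payload)
print(len(payload["messages"]))  # 2
```

Most providers follow some variant of this shape, so swapping models or vendors is often just a change of endpoint and model name.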

Uncontrollable AI Risks

We are joined by Darren McKee, a Policy Advisor and the host of Reality Check, a critical thinking podcast. Darren shared his background and how he got into the AI space.

AI Roundtable

Kyle is joined by friends and former guests Pramit Choudhary and Frank Bell for an open discussion of the impact LLMs and machine learning have had on industry in the past year, and where things may go in the current year.