Artificial Intelligence, Emerging Technology
October 22, 2020
AI, Neuralink, and the Evolution of Human-Machine Interfaces
Fast forward to more recent times, when people use more sophisticated tools in the form of computer systems. Even these modern digital tools have required people to manipulate them through physical inputs (mice and keyboards) and to observe the feedback through externally observable outputs (monitors). This paradigm of direct physical human-machine interfaces began to change in the past few decades with the rise of Artificial Intelligence (AI) capabilities.
AI has introduced an era in which computers not only respond to direct physical manipulation, but in which the tools themselves can observe a user and make predictions about that user's intent. For example, computer vision systems record people's natural kinesthetic body movements and gestures, use AI to interpret those gestures as requested actions (input), and display the results on a screen (for example, scrolling through a menu). These systems still rely on some physical input from users, although there is no longer the same direct physical connection. Digital assistants such as Apple's Siri, Amazon's Alexa, or Google Home similarly require a physical input, although an extremely light one: the breath of air that carries a voice command.
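To make that record-interpret-act pipeline concrete, here is a minimal, purely illustrative Python sketch. Every name in it is hypothetical: the Gesture enum, the wrist-trace heuristic standing in for a trained vision model, and the print statements standing in for real UI actions are simplifications, not any production system's API.

```python
from enum import Enum, auto

class Gesture(Enum):
    SWIPE_UP = auto()
    SWIPE_DOWN = auto()
    NONE = auto()

def classify_gesture(wrist_y_trace: list[float]) -> Gesture:
    """Toy stand-in for a vision model: infer a gesture from how far
    a tracked wrist moved vertically across recent video frames."""
    delta = wrist_y_trace[-1] - wrist_y_trace[0]
    if delta < -0.2:   # wrist rose in image coordinates
        return Gesture.SWIPE_UP
    if delta > 0.2:    # wrist fell
        return Gesture.SWIPE_DOWN
    return Gesture.NONE

def handle(gesture: Gesture) -> None:
    """Translate the interpreted gesture into a conventional UI action."""
    if gesture is Gesture.SWIPE_UP:
        print("scroll menu up")
    elif gesture is Gesture.SWIPE_DOWN:
        print("scroll menu down")

# Example: the wrist rises across five frames, so the menu scrolls up.
handle(classify_gesture([0.80, 0.70, 0.62, 0.55, 0.50]))
```

A real system would replace the heuristic with a trained neural network, but the overall structure (observe the body, infer intent, act on the screen) is the same.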
On August 28, 2020, Elon Musk and the Neuralink team presented a live demonstration of Neuralink's latest experimental technology: a device that can be implanted in the skull both to read signals from and to introduce electrical impulses into the outer cortex of the brain.1 In the presentation, Neuralink showed a video of a pig with a Neuralink implant walking on a treadmill, while predictive algorithms attempted to determine the test-subject pig's position and motion with surprising accuracy. Although early in development, this technology offers a view into a possible future in which the tools of our digital world no longer require physical manipulation for input or an externally observable output. It is an interesting world to imagine in my mind's eye, even as I type this observation out on my physical keyboard.
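For technically minded readers: Neuralink has not published the details of those predictive algorithms, but the general idea behind such neural decoders can be illustrated with a simple linear least-squares decoder, a classic baseline in brain-computer-interface research. The sketch below is purely hypothetical; the spike counts, the "true" weights, and the limb position are all simulated, and nothing here represents Neuralink's actual model or data.

```python
import numpy as np

# Toy sketch of neural decoding: fit a linear map from simulated
# electrode spike counts to a limb position, then predict from it.
# Purely illustrative -- not Neuralink's actual method or data.

rng = np.random.default_rng(0)
n_samples, n_channels = 500, 64

# Simulated spike counts per electrode channel per time window.
spikes = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)

# Pretend the true limb position is a hidden linear function of the spikes.
true_weights = rng.normal(size=n_channels)
position = spikes @ true_weights + rng.normal(scale=0.5, size=n_samples)

# Fit a least-squares linear decoder to the simulated recordings.
weights, *_ = np.linalg.lstsq(spikes, position, rcond=None)

# Compare decoded positions against the simulated ground truth.
predicted = spikes @ weights
print("correlation:", np.corrcoef(position, predicted)[0, 1])
```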
Seth Brooks is a Vice President in FCAT.

References & Disclaimers
1 See Neuralink Progress Update, Summer 2020, available at https://www.youtube.com/watch?v=DVvmgjBL74w
948978.1.0