Lydia Chilton on AI Creativity: What’s the Big Idea?
March 10, 2022
Can the design process be enhanced or improved by the power of AI? After a recent Artificial Intelligence Club event, Columbia University’s Lydia Chilton sat down with FCAT’s Sarah Hoffman to discuss the potential of AI as a creative tool. Chilton is an Assistant Professor in the Computer Science Department and is an early pioneer in deconstructing complex problems so that crowds and computers can solve them together. Her current research is focused on computation and how AI can help people with design, innovation, and creative problem solving.
How AI Can Foster Inclusion
December 9, 2021
We've spent a lot of time discussing the unintended bias that can easily creep into AI algorithms. But the same technology, properly designed and trained, can also be used to confront biases. A new generation of automated tools seeks to proactively promote inclusion in:
How Creative AI Can Fast-Track Innovation
September 2, 2021
As companies try to figure out how and when to bring employees back to the office, one frequently cited benefit of doing so is that proximity to colleagues – and the chance for spontaneous meetings and conversations – spurs innovation. Others argue, however, that no evidence supports that contention, and that office culture may actually hamper innovation because requiring people to be in a prescribed place at specific times excludes some of them.1 In fact, technology and artificial intelligence have already enabled us to move beyond in-person experiences in many ways – just consider how we shop for food and clothing, choose a movie to watch, and even how we date. So can AI enable us to innovate more effectively, with anyone, at any time, and in any location, thereby reducing the need to be in an office?
Teaching a Robot to Read
August 18, 2021
Over the last several years, one of the FCAT AI teams – code-named “RoboReader” – has been working on document processing: extracting needed information from unstructured text and transforming it into structured data that can be used by the business. In the course of this work, we have noticed parallels between the way we are “teaching” the system and the way we read as humans.
The Cost of Manipulating AI
June 23, 2021
We’ve all received email messages with deliberate misspellings designed to outsmart AI-driven spam filters. Perhaps you’ve also heard about how a two-inch piece of tape tricked Tesla cars into speeding up by 50 miles per hour.1 What these and hundreds of other anecdotes demonstrate is that it’s relatively easy to manipulate AI systems. Indeed, efforts to "fool" AI have already impacted our industry, where we find numerous examples of people trying to manipulate:
Designing Automated Systems That Humans Will Trust
May 17, 2021
Automation is moving up the value chain, taking on more “human” tasks like financial planning, factory floor management, and home health care. In response, companies need to reevaluate how to build trust in automated systems, considering both their reliability and the human-machine relationship.
New Data Fuels AI Opportunities in a Remote World
March 17, 2021
Now that so many of us are doing almost everything online at home – shopping, work, doctor appointments, school, financial check-ins, parent-teacher conferences – you may have noticed some not-so-subtle behavior changes among those around you. Maybe a colleague has suddenly started blocking her video feed during Zoom calls. Perhaps a customer has started speaking more slowly, making a lot of spelling errors, or typing at a different pace. Maybe you’ve noticed a change in the tone of a colleague’s MS Teams or Yammer posts, or a change in a customer’s chatbot message style. All of this could be indicative of something important, and AI is an ideal tool for picking up on these changes.
Building Trust in AI Systems
January 28, 2021
Bias in data used by AI algorithms is drawing increasing attention. The internet is full of examples of bias in AI systems: recruiting algorithms trained on data that favored male candidates, facial recognition software unable to accurately identify people of color, medical systems trained on insufficiently diverse data, and many others. However, there is another aspect of bias that impacts AI systems and bears scrutiny as well: the cognitive biases that users bring to the table.
AI, Neuralink, and the Evolution of Human-Machine Interfaces
October 22, 2020
Humans have had an intimate, physical relationship with technology and tools throughout time. Across history, the tools people have created share the same two basic characteristics: a tool requires that the user manipulate it through some physical input, and the tool (or some connected object) responds accordingly, providing the user feedback via an externally observable output. For example, at the dawn of time a person might strike a sharp stone tool against a branch (the input) and then see the tool’s effect in the cuts created in the bark (the externally observable output).
Women in Artificial Intelligence – A Conversation
October 16, 2020
FCAT’s Sarah Hoffman, VP AI and Machine Learning Research, sits down for a wide-ranging interview with Haptik’s Burka Gurgenidze-Seinau to discuss the AI field, how AI influences our daily lives, and how it may reshape the future. This engaging video conversation covers a multitude of topics, including misconceptions aspiring professional women may have about AI and the AI field, how women are influencing AI development and implementation, and the broad impact AI is having across culture, commerce, and daily life. Watch here.