The Present and Future of AI: A Slack Takeover with Raj Subramanian
Thought leaders throughout the software community are taking over the TechWell Hub for a day to introduce themselves, answer questions, and engage in conversations.
Raj Subramanian is a developer evangelist for Testim.io, which provides stable, self-healing, AI-based test automation. He also contributes mobile training, consulting, and other work to the testing community. @Raj was the most recent expert to take over the Slack community, which led to some insightful discussions.
“Hey @Raj Why AI? Why now? What is the aim of AI?” —@Jason Hamilton
It seems like everything today is marketed as being powered by AI—whether it actually is or not.
The reason behind this push, Subramanian said, is the massive amount of data being generated by Google, Amazon, Facebook, and other apps. It would be impossible to manually comb through trillions of lines of data to find relevant information, so companies are investing in AI to do it for them.
Subramanian related what he calls a freaky story: “Based on the buying patterns at Target, AI models could predict whether a particular customer is pregnant and send targeted advertisements in emails and posts. That is how crazy it can get in terms of finding patterns.”
Bias in AI
“Hi Raj - can you talk a little bit about testing AI, specifically as it relates to driving out bias (or even identifying bias)?” —@Melissa
“In the majority of the AI models we see, we have no idea how it learns from data and makes decisions based on that data. So essentially AI is a black box,” Subramanian said.
Identifying biases by getting AI to justify its decisions is currently one of the top areas of AI research, but it would be impossible to drive out all biases. “It’s like saying ‘My software is 100 percent defect-free,’” Subramanian said. “The only thing we could do is minimize the extent to which bias influences AI’s decisions. For that, we need people who have a diversity context to train the AI model.”
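The point about bias entering through training data can be illustrated with a toy sketch (all names and numbers here are invented for illustration, not from the interview): a naive model that simply learns the majority historical outcome per group will reproduce whatever skew exists in that history, without ever exposing why.

```python
from collections import Counter, defaultdict

# Invented toy "approval" history: candidates from two groups,
# where past decisions were skewed against group "B".
history = (
    [("A", "approve")] * 90 + [("A", "reject")] * 10 +
    [("B", "approve")] * 30 + [("B", "reject")] * 70
)

def train(rows):
    """A naive 'model': predict the majority historical label per group."""
    counts = defaultdict(Counter)
    for group, label in rows:
        counts[group][label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # the historical skew is learned verbatim
```

The model offers no explanation for its per-group outputs, which is the black-box problem in miniature: the bias is visible only if someone thinks to audit the predictions across groups.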
Weak and Strong AI
“@Raj, the other day we talked about narrow AI vs. strong AI. Can you explain to me what the difference is?” —@owen
The AI we currently have is weak, or narrow, AI, which does one specific task better than humans but fails at others due to a lack of consciousness or emotional intelligence. For example, Subramanian said, take AI-based virtual assistants like Google Home and Alexa: They are good at giving responses based on information they can find on the internet, and that’s about it.
“But the device cannot have a real conversation with you, debate with you intelligently, or console you when you are sad,” Subramanian said. “These involve human emotions, which is difficult to implement in AI. … They are fast and clever, but not intuitive, emotional, or culturally sensitive.”
Strong AI will be able to work, think, and react like human beings. Who knows when we’ll reach that point, but even when we do, Subramanian said there’s no reason to fear machines replacing humans. “Just like automation, AI is just a tool and is a means to do something that consumes a lot of time,” he said. “It’s not all doom and gloom.”