It's Time to Be Cautiously Optimistic about Artificial Intelligence
Artificial intelligence (AI) is everywhere today. Whether in college projects, start-ups exploring new ventures, or large organizations pursuing research and collaboration opportunities, AI sits at the top of the portfolio of services. However, the picture is not all rosy, even for insiders heavily invested in this space. They remain cautiously optimistic, weighing the potential AI holds against the adverse impact it could have if pushed past its limits.
Movies offer a first glimpse of what building such powerful systems might lead to, and experts in the field caution that AI systems could become a top risk to the human race in the coming years. But is the picture all gloomy and guaranteed to go downhill? Certainly not. It is exciting to see the potential AI holds across disciplines, especially in medicine and education. Its potential for providing global solutions seems boundless at this time.
Clearly, AI is turning out to be a double-edged sword: capable of addressing a wide range of unsolved problems, yet holding many lessons we must learn before rolling out implementations. The testing community has a part to play in mitigating the potential threats and maximizing the gains. In fact, AI also has a huge role to play in enabling testers to perform better.
AI is one area where quality engineering teams have a very proactive role to play, one that extends well beyond defined requirements and stretches the tester's role from end user toward product or program manager. The tester also becomes a gatekeeper, looking beyond the horizon at the implications the product will have in the marketplace. This role ensures that what is built is built right, while building the right product becomes everyone's responsibility.
Thankfully, whether in start-ups or large companies, the teams that handle AI solutions today are small and specialized, leaving little room for communication breakdowns or consensus gaps; right or wrong, the entire team is in it together. If teams get into a collective brainstorming mode early on, shedding light on the implications of a product that may seem beneficial in the short run but could have serious repercussions in the long run, the true benefit of AI can be reaped.
Sooner or later, AI solutions and applications will be subject to regulations and mandates, which will bring needed discipline to the industry. Such policing is warranted in light of the adverse effects AI could have. If product teams, with quality playing a key role, start that policing internally today, we will be well on our way to cautiously and optimistically embracing AI.