New research addresses the liabilities and risks of deploying AI in the food industry, proposing a temporary adoption phase to assess AI's benefits and obstacles, and highlights the need for more research on economic and legal frameworks.

The conversation about deploying artificial intelligence typically centers on the trustworthiness of AI applications: Are they producing reliable, unbiased results and protecting data privacy? A recent paper published in Frontiers in Artificial Intelligence raises a different issue: What if an AI is simply too good?

Carrie Alexander, a postdoctoral researcher at the AI Institute for Next Generation Food Systems, or AIFS, at the University of California, Davis, interviewed a wide range of food industry stakeholders, including business leaders and academic and legal experts, about the industry's attitudes toward adopting AI. A notable concern was whether gaining extensive new knowledge about their operations might inadvertently create new liability risks and other costs.

For example, an AI system in a food company might reveal potential contamination with pathogens. Having that information could be a public benefit but could also expose the company to future legal liability, even if the risk is very small.

"The technology most likely to benefit society as a whole may be the least likely to be adopted, unless new legal and economic structures are adopted," Alexander said.

An on-ramp for AI

Alexander and co-authors Professor Aaron Smith of the UC Davis Department of Agricultural and Resource Economics and Professor Renata Ivanek of Cornell University argue for a temporary "on-ramp" that would allow companies to begin using AI while exploring the risks, the benefits, and ways to mitigate them. This would also give courts, legislatures and government agencies time to catch up and consider how best to use the information generated by AI systems in legal, political, and regulatory decisions.

"We need ways for businesses to opt in and test AI technology," Alexander said. Subsidies, for instance for digitizing existing records, could be helpful, especially for small businesses.

"We're really hoping to generate more research and discussion on what could be a significant issue," Alexander said. "It's going to take everyone to figure it out."

Reference: "Safer not to know? Shaping liability law and policy to incentivize adoption of predictive AI technologies in the food system" by Carrie S. Alexander, Aaron Smith and Renata Ivanek, 17 November 2023, Frontiers in Artificial Intelligence.
DOI: 10.3389/frai.2023.1298604

The work was supported in part by a grant from the USDA National Institute of Food and Agriculture. The AI Institute for Next Generation Food Systems is funded by a grant from USDA-NIFA and is one of 25 AI institutes established by the National Science Foundation in collaboration with other agencies.