Manufacturers are incorporating artificial intelligence into their processes to boost efficiency and gain a competitive edge.
Eventually, the right combination of robots, data and software could lead to fully autonomous semiconductor fabs, or to processes that free up the human workforce to solve problems alongside AI, according to executives at an industry conference this week.
Leaders from Intel, EMD, Global Foundries and other computer chip companies discussed their visions for AI at the Advanced Semiconductor Manufacturing Conference in Albany, New York. They also spoke about the challenges and limitations facing the technology as adoption grows across the industry.
From issues around data scarcity to hallucinations, here are some of the key limitations around AI that industry leaders are facing.
Identifying where to get value from AI
Currently, tools like ChatGPT and other large language models can generate human-like text and perform language-related tasks. Some tools can generate video, images or even code, while other forms of AI can create digital replicas of factory floors, handle repetitive tasks or improve quality control through computer vision.
There are countless possibilities for the use of AI in chip manufacturing. However, as companies race to incorporate the technologies into their operations, they can fall into the trap of implementing tools without understanding how they can improve performance.
“People tend to use that hype. Everything is AI. Let's try to do something with AI, but we don't get value out of how much we invested,” Safa Kutup Kurt, global head of plant operations and digital transformation at EMD Technologies, said during a conference panel. “We need to find that balance, and we need to really work on scalability of the solution and generating the value.”
“If we cannot quantify it, well, it's going to be difficult to scale,” Kurt added.
The ‘explainability’ of AI
AI tools can provide answers to problems, but understanding how they arrive at their conclusions is tricky.
“Improving just the explainability and building trust in these models is a huge way…that they need to evolve,” Jason Komorowski, senior automation and analytics engineer at Intel Corp., said during the panel.
“We can't just give people a black box and say you input what you want and it'll spit out what you need to do for it. Right? We have to be able to explain it, understand which features are being used, and how we're building and coming to our decisions,” Komorowski added.
Improving partner collaborations on AI
Companies tend to focus on advanced technology improvements within their own organizations. Those sorts of issues are easier to fix, said Pawitter Mangat, vice president of global tapeout and mask ops at Global Foundries. It’s getting the companies aligned with their partners on technologies across the entire end-to-end market that is most challenging, Mangat added.
“We all have different processes, but we have different models,” Mangat said. “How do you optimize that from that perspective, to say we don't have a customized solution for [Global Foundries] versus Intel versus other companies, right? That's where I think the biggest challenge is, and we can all gain if we have a process that allows us that sharing, or at least define the baseline that we're aligned in some of these models.”
Data scarcity
In addition to the need for increased collaboration, data availability and accessibility are going to be challenges for progress and innovation.
AI models require vast amounts of data to operate effectively without variance, and companies are often not willing to share proprietary data with others, Ross Kunz, data scientist at Idaho National Laboratory, said during the panel.
“Not one single model will actually go ahead and provide you the best results that were right,” Kunz said. “ChatGPT is not going to provide you the best results for every single process. And so there has to be some sort of robustness associated with models such that we can actually compare across different industries and develop … methodology such that it can be applied across multiple scales.”
Validation of AI models
A lot of progress has been made in getting AI models functional, but validating their outputs remains a challenge.
“We read in the news all the time about hallucinations with Gen AI systems, right?” Komorowski said. “It's very hard to validate if the answer is correct or not without a person saying, ‘Yeah, this makes sense.’ So validation of these models is certainly a limitation.”
AI has already improved some shop floor operations, such as creating digital part inventories to reduce waste and running full factory simulations, but there is more work to be done to get the most out of the technology.
More research and development is needed to ensure models can generate deterministic rather than probabilistic results, Komorowski said. He also underscored the need for improved AI “explainability” and cross-collaboration between vendors and customers.
As progress is made toward standardized platforms and formats for sharing, Komorowski said, Intel and other companies will be able to apply AI across the factory floor “so we’re not having to recreate it every time, and that’s a huge enabler.”