AI tools lack ‘expertise’ for site selection
A recent study found that generative AI doesn't produce the same results as humans when helping businesses pick a city or state for a new factory or headquarters. It also doesn't explain how it makes its choices—a troubling finding, researchers say.
Few things get mayors, governors and other elected leaders more excited than attracting new businesses, whether it be companies looking to relocate offices, expand manufacturing facilities or build new headquarters.
The competition to win jobs and grow local economies has always been fierce between cities and between states. But the constant tug of war has come into especially sharp focus in the last couple of years with the passage of the CHIPS and Science Act and the subsequent jockeying to become a manufacturing and research hub for semiconductors. And who could forget the race six years ago to win Amazon’s second headquarters and its promise of thousands of jobs?
Often, there’s an unseen player in a company’s decision to relocate. Businesses typically turn to organizations like the Site Selectors Guild, the only association of professional site selection consultants. SSG produces voluminous reports on cities that are potential new locations for businesses, with its legion of consultants fanning out across the country to scout places.
But, as with many industries, artificial intelligence could disrupt the site selection process, removing the need for experts to visit cities in person to evaluate their suitability. In an effort to examine the possible effects of AI, SSG in January pitted AI against human site selectors for two projects.
The study found that the emerging technology could certainly play a role in aiding site selection. It could not, however, replace humans outright. Instead, the study concluded, generative AI could help produce low-risk results that augment the work consultants are already doing.
The two projects studied were the relocation of a software company’s headquarters away from San Francisco and a manufacturing company’s search for a new industrial facility to increase capacity.
David Chiu, a professor of mathematics and computer science at the University of Puget Sound, conducted a blind test of three generative AI platforms—Google Gemini, ChatGPT 3.5 and ChatGPT 4.0—to see if they produced the same site selection results as the people in the field. Regardless of the outcome, Chiu said the project was a good way to “understand or discover opportunities and limitations of the three chatbots that were tested.”
In the end, for the new industrial facility, the study found almost no overlap between the shortlist of locations generated by the AI tools and the one generated by SSG consultants; the humans and the technology agreed on only one possible location for the project. For the office relocation project, the AI-generated results were more promising but still fell short of the humans’. ChatGPT 3.5 produced nine of the 17 office locations shortlisted by the human consultants, ChatGPT 4.0 identified seven and Google Gemini produced only three.
That divergence between the AI-generated results and those produced by human consultants suggests that AI cannot be “used in a vacuum,” Chiu said, at least not until an effort is made to understand and fact-check the results it produces. The lack of transparency over how the AI tools reached their conclusions is troubling, and it prevents Chiu from determining, for example, whether the AI-generated shortlists are any better than the human ones.
“Even as it's spitting out the cities, it's not really clear how it came up with the list, what ranked one city over another city,” Chiu said. “You can access it, you can ask it to explain itself and it will, but it's still very shy about telling you exactly how it scored [cities]. You're also unsure about whether the information used for the scoring is even correct.”
Chiu added that in one scenario, when he asked ChatGPT 3.5 about the industrial project, the chatbot’s only response was to urge him to work “with a professional site selection consultant or conducting thorough research and analysis using location-specific data.”
Generative AI could produce some low-risk results quickly, Chiu said, like finding cities in the U.S. that have international airports with a direct flight to Tokyo. Beyond that, AI is limited at this time. “It's good for doing some of the work but you still need some expertise to finish it off,” he said.