https://www.wired.com/story/local-governments-generative-ai/

State and local governments in the United States are scrambling to harness tools like ChatGPT to unburden their bureaucracies, rushing to write their own rules and avoid generative AI's many pitfalls.

The U.S. Environmental Protection Agency blocks its employees from accessing ChatGPT, while U.S. State Department staff in Guinea use it to draft speeches and social media posts.

Maine has banned its executive branch employees from using generative artificial intelligence for the remainder of the year due to concerns about the state’s cybersecurity. In nearby Vermont, government workers are using it to learn new programming languages and write internal-facing code, said Josiah Raiche, Vermont’s director of artificial intelligence.

The city of San Jose, California, has written a 23-page guide to generative AI and requires city employees to fill out a form every time they use tools like ChatGPT, Bard, or Midjourney. Less than an hour north, Alameda County government has held meetings to educate employees about the risks of generative AI, such as its tendency to spit out convincing but inaccurate information, but doesn’t see the need for a formal policy yet.

“We focus more on what you can do than what you can’t do,” said Sybil Gurney, Alameda County’s assistant chief information officer. Gurney added that county staff “use ChatGPT to do a lot of paperwork” and use Salesforce’s Einstein GPT to simulate users for IT system testing.

At every level, governments are looking for ways to leverage generative AI. State and city officials told Wired they believe the technology can improve some of the bureaucracy’s most unsavory qualities by streamlining routine paperwork and improving the public’s ability to access and understand dense government materials. But governments – bound by strict transparency laws, elections and a sense of civic accountability – also face a different set of challenges than the private sector.


“Everyone cares about accountability, but when you’re actually the government, accountability goes to a different level,” said Jim Loter, interim chief technology officer for the City of Seattle, which released preliminary generative AI guidance for its employees in April. “The decisions made by government can affect people in quite profound ways, and … we have a responsibility to treat our public fairly and responsibly in the actions we take and to be transparent about the methods used to make decisions.”

Last month, an assistant superintendent in Mason City, Iowa, received national attention for using ChatGPT as a first step in determining which books should be removed from the district’s libraries because they contained depictions of sex acts, removals required by a recently enacted state law. The incident illustrates the risks government employees face when using generative AI.

This level of scrutiny of government officials is likely to continue. In their generative AI policies, the cities of San Jose and Seattle, as well as the state of Washington, have warned staff that any information entered as a prompt into a generative AI tool becomes subject to disclosure under public records laws.

That information is also likely to be automatically ingested into the corporate databases used to train generative AI tools, and it could potentially be served back to another person using a model trained on the same dataset. In fact, a large study published last November by Stanford University’s Institute for Human-Centered Artificial Intelligence found that the more accurate large language models are, the more prone they are to regurgitating whole chunks of content from their training sets.

This is a particular challenge for health care and criminal justice agencies.

Loter said Seattle staff have considered using generative AI to summarize lengthy investigative reports from the city’s Office of Police Accountability. These reports can contain information that is public but still sensitive.


Staff at the Maricopa County Superior Court in Arizona use generative AI tools to write internal code and generate document templates. Aaron Judy, the court’s chief of innovation and artificial intelligence, said they have not yet used it for public-facing communications but believe it has the potential to make legal documents more readable for non-lawyers. In theory, staff could feed public information about a court case into a generative AI tool to create a press release without violating any court policies, but, Judy said, “they might get nervous.”

“You’re using citizen input to train the money engine of private entities so that they can make more money,” Judy said. “I’m not saying that’s a bad thing, but we all have to be comfortable saying at the end of the day, ‘Yes, this is what we’re doing.'”

San Jose’s guidelines don’t outright ban the use of generative AI to create documents for the public, but they deem it “high risk” because of the technology’s potential to introduce misinformation and because the city is precise about how it communicates. For example, a large language model asked to write a press release might use the word “citizen” to describe the people who live in San Jose, but the city uses only the word “resident” in its communications, because not everyone in the city is a US citizen.

Civic tech companies like Zencity have added generative AI tools for writing government press releases to their product lines, while tech giants and major consulting firms, including Microsoft, Google, Deloitte, and Accenture, are pitching a variety of generative AI products at the federal level.


The earliest government policies on generative AI came from cities and states, and the authors of several of them told Wired they are eager to learn from other agencies and improve on their standards. Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, said the time is ripe for “clear leadership” and “concrete, detailed guidance from the federal government.”

The federal Office of Management and Budget will release draft guidance for the federal government’s use of artificial intelligence sometime this summer.

The first wave of generative AI policies issued by city and state agencies are interim measures that officials say will be evaluated and expanded in the coming months. They broadly prohibit employees from feeding sensitive or non-public information into prompts and require some level of human fact-checking and review of AI-generated work, but there are notable differences.

For example, guidelines in San Jose, Seattle, Boston and Washington state require employees to disclose their use of generative AI in work products, while the Kansas guidelines do not.

Albert Gehami, San Jose’s privacy officer, said the rules in his city and others will evolve significantly in the coming months as use cases become clearer and as public servants discover the ways generative AI differs from already ubiquitous technologies.

“When you work with Google, you type something in and you get a wall of different viewpoints, and we’ve been through the rigors of it for basically 20 years to learn how to use that responsibly,” Gehami said. “Twenty years from now, we might have this figured out with generative artificial intelligence, but I don’t want the city to spend 20 years fumbling around to get there.”
