The next frontier
A year after ChatGPT’s seismic debut, Va. companies prep for AI
You come down with coldlike symptoms. Flu season is here, and a new COVID subvariant is circulating. As the illness lingers, you question whether you should see a doctor.
Imagine putting your symptoms into a chatbot connected to your doctor’s office or health system that can retrieve your medical records, evaluate your information and recommend next steps.
“It could make recommendations on … should you be seen by one of our providers in the emergency room? Should you have a virtual visit with a provider? Should you have just a conversation with a triage nurse? Or do you need to schedule an appointment with a provider?” says Dr. Steve Morgan, senior vice president and chief medical information officer at Roanoke-based health system Carilion Clinic.
Such a scenario isn’t science fiction — it exists now, through artificial intelligence-powered tools like Microsoft’s Azure Health Bot.
“Although we don’t have it now, we’re building the infrastructure to be able to employ that type of technology,” Morgan says. Carilion has already embraced other AI software, like a dictation system for medical notes.
One year after ChatGPT came on the scene, redefining expectations for AI capabilities, industries have begun adopting AI chatbots in varying forms, including building their own models. In this Wild West of rapidly developing tech, companies’ approaches to workforce training range widely, from encouraging employees to explore on their own to tightly structured rollouts.
Generative AI tools like ChatGPT — AI platforms that synthesize new content, rather than just analyzing data as AI was traditionally designed to do — are built on large language models (LLMs) that are essentially “glorified sentence completion tools,” says Naren Ramakrishnan, the Virginia Tech Thomas L. Phillips Professor of Engineering and director of Tech’s Sanghani Center for Artificial Intelligence and Data Analytics.
“They sound so realistic and so compelling because they have been trained or learning on a ridiculous amount of data,” enabling the AI engines to learn which words make sense in context, he explains.
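Ramakrishnan’s “sentence completion” framing can be illustrated with a toy sketch — counting which word most often follows another in training text, then picking the likeliest continuation. This is a deliberately simplified bigram model, not how production LLMs work; real systems use neural networks trained over far longer contexts and vastly more data.

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for the "ridiculous amount of data" real LLMs train on.
corpus = ("the patient has a fever . the patient has a cough . "
          "the doctor sees the patient").split()

# Count, for each word, which word follows it in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word):
    """Return the continuation most often seen after this word in training."""
    return follows[word].most_common(1)[0][0]

print(complete("patient"))  # "has" -- the most frequent continuation
```

The model “sounds” sensible only because its training data did; scale that counting up to billions of parameters and trillions of words of context-aware statistics, and the completions begin to read like fluent prose.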
OpenAI’s ChatGPT became a household name shortly after OpenAI released a demo of the conversational AI platform on Nov. 30, 2022. ChatGPT can perform many of the same tasks as human knowledge workers — ranging from drafting emails, business letters, reports and marketing materials to performing paralegal duties, writing computer code, entering data into spreadsheets and analyzing large amounts of data — and it can produce finished work in as little as one second to a few minutes, depending on length and complexity. In March, OpenAI released an updated model, GPT-4, available to subscribers. GPT-4 scored better than 90% of human test-takers on the Uniform Bar Exam, the standardized bar exam for U.S. attorneys.
Generative AI has garnered huge investments. Microsoft has reportedly invested $13 billion in OpenAI since 2019, and Amazon announced in September that it would invest up to $4 billion in Anthropic, an OpenAI rival that has also received $300 million in funding from Google.
In a survey of 1,325 CEOs released in early October by KPMG, 72% of U.S. CEOs deemed generative AI “a top investment priority,” and 62% expect to see a return on their investment in the tech within three to five years.
Generative AI is developing at a blistering pace. On Sept. 25, OpenAI released a version of ChatGPT that can listen and speak aloud. It’s also able to respond to images.
AI is already changing the work landscape, says Sharon Nelson, president of Fairfax-based cybersecurity and IT firm Sensei Enterprises. “It’s a bolt of lightning. … We’re seeing it go at the speed of light, and I can only imagine that it will go faster still.”
As the tech has quickly progressed, large Virginia companies have formally adopted AI tools and are creating standard AI training policies and processes for their employees.
Reston-based Fortune 500 tech contractor Leidos is providing varying levels of training for employees based on their needs, ranging from workers who just need general AI awareness to subject matter experts. Leidos builds curricula with a mix of in-house content and external courses from suppliers like Coursera, says Doug Jones, the company’s deputy chief technology officer.
Like many companies, Leidos is creating an internal AI chatbot, although the company also plans to offer it to customers. The chatbot will focus on IT and software questions, allowing workers to search for answers specific to the firm.
Businesses with troves of documents can easily adapt an LLM to be specific to their documents and processes, Ramakrishnan says: “I’m noticing everybody wants to create their own LLM that’s specific to them that they can control. Because they certainly do not want to send their data out to OpenAI.” Because ChatGPT learns from its interactions with humans, information entered into the tool could be shared with another user.
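A common pattern behind the company-specific chatbots Ramakrishnan describes is retrieval: rather than shipping documents to an outside provider, the business indexes them in-house and feeds the model only the passages relevant to a question. The sketch below is a minimal, hypothetical illustration using simple word overlap to rank internal documents; production systems typically use vector embeddings and more sophisticated scoring.

```python
# Hypothetical internal knowledge base: document name -> text.
docs = {
    "vpn-guide": "how to connect to the corporate vpn from home",
    "leave-policy": "requesting parental leave and sick days",
    "expense-faq": "submitting travel expense reports for reimbursement",
}

def retrieve(query, k=1):
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(docs[d].split())),
                    reverse=True)
    return ranked[:k]

print(retrieve("submit travel expense reports"))  # ['expense-faq']
```

The retrieved text would then be pasted into the LLM’s prompt as context, so the sensitive documents never leave the company’s own systems.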
Businesses are also taking advantage of generative AI tools built specifically for their industries.
Virginia’s largest law firm, Richmond-based McGuireWoods, is beginning to use CoCounsel, an AI tool designed for attorneys and built on GPT-4 that should allow attorneys to enter client data securely in the near future. Thomson Reuters acquired CoCounsel’s developer, Casetext, in April for $650 million in cash.
CoCounsel has a range of uses, like drafting a discovery response or helping an attorney brainstorm questions for a specific deposition. An attorney preparing to depose an expert witness could feed the tool the expert’s published papers and ask it to summarize them or ask it whether the expert has ever taken a position on a particular subject in them, explains McGuireWoods Managing Partner Tracy Walker.
A widening reach
ChatGPT isn’t always a reliable source; it can sometimes fabricate detailed answers, a phenomenon referred to as “hallucinations.” One attention-grabbing misuse of ChatGPT demonstrated this problem: Lawyers representing a client in a personal injury case against Avianca Airlines cited six fabricated cases as legal precedent, based on research performed with ChatGPT. A federal judge fined the firm — Levidow, Levidow & Oberman — and two lawyers $5,000 apiece.
Walker stresses that responsible attorneys will look up and read cases cited by an AI chatbot, but CoCounsel also provides a safeguard, says Peter Geovanes, McGuireWoods’ chief innovation and AI officer: It’s been instructed not to provide an answer if it does not know it.
McGuireWoods is taking a two-phased approach to CoCounsel’s rollout. The first phase, which started in September and is running through the end of the year, is a pilot program with about 40 attorneys. While Casetext completes its security review of CoCounsel, McGuireWoods’ pilot group is using only public data to test hypothetical uses of the tool. Starting in early 2024, McGuireWoods’ phase two testing will likely expand to about 100 attorneys.
In the meantime, Geovanes is leading foundational training about generative AI. The firm’s first brown bag webinar session was set for Oct. 17. Although the curriculum is designed for attorneys, recordings will be available for any interested employee. McGuireWoods also plans to offer outside courses about the responsible and ethical use of generative AI.
For attorneys selected for the pilot program, the firm will also offer specialized training from Casetext on “prompt engineering” — how to phrase questions to the chatbot to get the desired responses.
In Roanoke and the New River Valley, Carilion is preparing to pilot a new layer of an existing AI-powered transcription tool built for clinicians. The health system has used Nuance’s Dragon Medical One, which transcribes clinicians’ notes as they speak, for “a number of years,” Morgan says.
Microsoft purchased Nuance for $19.7 billion in March 2022. In March 2023, Nuance launched Dragon Ambient eXperience (DAX) Express (now DAX Copilot), which is based on GPT-4. It listens to a clinician-patient conversation and drafts clinical notes seconds after the appointment. Morgan hopes to begin piloting DAX in the first quarter of 2024. Because they’ve used Dragon, practitioners likely won’t need much training to adjust to DAX, he says.
Additionally, Carilion is participating in a pilot test of an AI component in the MyChart patient portal offered by Carilion’s electronic medical records vendor, Epic. The AI tool is designed to draft responses to patient questions sent through the portal, taking into account a patient’s medications and medical history. Six Carilion practitioners are participating in the pilot, which started in September, receiving on-the-fly training from Epic and providing feedback.
Examining new terrain
Smaller Virginia companies with fewer resources seem to have taken a more cowboy approach to the new AI frontier, setting ground rules before encouraging employees to explore generative AI tools on their own.
Will Melton, president and CEO of Richmond-based digital marketing agency Xponent21, is also leading a regional work group focused on preparing Richmond’s workforce for AI. Xponent21 initially used Jasper, an AI software tool for writing and marketing, but the firm now uses ChatGPT for tasks like information analysis and developing initial copy, which then goes through human editors.
“I think that the biggest thing that these tools give us is freeing up time that is … spent on monotonous activities that don’t have a lot of value,” like helping employees spend less time writing social media posts or blogs and more time speaking with clients, he says.
Ben Madden, board president for the Northern Virginia Society for Human Resource Management, has begun using ChatGPT in his HR consulting work, asking the AI tool to draft job descriptions and synthesize information for presentations and policy documents.
“Having it be able to do tasks that may take longer without having the support of supercomputers behind it is where I continue to probably see it being used and being able to make my life easier as either a business owner or even for my clients,” says Madden, whose one-person consultancy, HR Action, is based in Arlington County.
Another Richmond-based business beginning to adopt AI is accounting firm WellsColeman, which updated its internet acceptable use policy to include guardrails for AI and ChatGPT usage, like prohibiting employees from entering client data into the platform.
Nevertheless, the firm has encouraged its employees to get familiar with ChatGPT, says Managing Partner George Forsythe. In full firm meetings, leadership will sometimes demonstrate how they’ve recently used ChatGPT, and staff can ask questions or discuss possible uses.
“We’re using [ChatGPT] as an initial step in gaining familiarity with areas that are not part of our everyday expertise. It’s an easy way to get a broad brush on any topic area,” says Forsythe. After verifying the information given, staff can use it as a starting point for their research.
Forsythe has consulted ChatGPT with general management questions like how to communicate with an employee having leadership challenges and has also used it as a marketing aid.
“When it comes to selling our services, I’ve asked it to put together a proposal and make it intriguing and have a hook,” Forsythe says, and he’s been pleased with the results.
Similarly, Winchester-based accounting firm YHB is using generative AI tools for marketing questions that aren’t firm-specific.
“Our team uses [ChatGPT] a ton to help understand and interpret tax laws and information like that,” says Jeremy Shen, YHB’s chief marketing officer. They’ll also ask the chatbot if a website post will have a high search engine optimization score.
The firm is working on selecting an AI tool to formally implement, whether ChatGPT Enterprise, Microsoft’s Copilot or another. For now, “we just kind of said, ‘We know you’re using it. We know people are using it. Here’s some guardrails … but discover and let us know if you come up with something useful,’” Shen says.
The new steam engine?
Out of 31,000 people surveyed across 31 countries, 49% are worried that AI will replace their jobs, according to a Microsoft survey released in May. That same month, a CNBC/SurveyMonkey poll found that 24% of almost 9,000 U.S. workers surveyed are worried that AI will make their jobs obsolete.
It’s not an unfounded fear. Within 10 years, AI automation could replace about 300 million full-time jobs, according to a March report from Goldman Sachs researchers, but it could also raise global GDP by 7%, or nearly $7 trillion. In May, IBM CEO Arvind Krishna said AI could replace up to 7,800 jobs — 30% of the company’s back-office workers — over five years.
A refrain commonly heard among AI’s proponents is, “AI won’t take your job, but someone who knows how to use AI will.” It’s paraphrased from a statement made by economist Richard Baldwin, a professor at the International Institute for Management Development, during the 2023 World Economic Forum’s Growth Summit.
“I see some paralegals perhaps being replaced by AI, and only some, because there are some paralegals that have other advanced skills as well,” says Nelson with Sensei Enterprises, who is also an attorney and former president of the Virginia State Bar. Lawyers who do simpler tasks like drafting wills or divorce contracts might be vulnerable to being supplanted by AI, too, she says.
Comparisons to prior technological advances abound. “When the world switched from horse-drawn transport to motor vehicles, jobs for stablehands disappeared, but jobs for auto mechanics took their place,” Federal Reserve Board of Governors member Lisa D. Cook said in a September speech at a National Bureau of Economic Research conference. Workers’ adaptability will depend on their “portfolio of skills,” she said.
Supporters say AI will make employees more productive, which can help industries weather labor shortages and let workers put aside rote tasks to focus on higher-level work, which could increase their job satisfaction.
In the world of government contracting, the constraints on some workers, like getting security clearances and working in-person in a classified environment, can make hiring difficult, says Leidos’ Jones.
“We actually find sometimes we can take some of the tasks that are not as engaging for our own employees [like data entry] … off their plate, and they can spend more time doing the things that are really powerful and unique to humans,” he says.
Forsythe also sees AI as an aid to staff: “Right now, the war is for talent. … If we can’t find more people, one of the things we can do is try to change their roles … and support them in manners that make their jobs easier, not so that way they’ll do more work, but so that way they remain part of the firm and don’t feel overburdened,” he says.
Or it could just improve workers’ quality of life. In an early October interview with Bloomberg Television, JPMorgan Chase CEO Jamie Dimon predicted that time savings from AI could result in a universal 3.5-day workweek — though he also said that he anticipates that AI will result in lost jobs.
While AI will eliminate jobs, it will also create them, experts say. The Washington, D.C., region had almost 1,800 listings for AI-related jobs at the end of August, according to Jones Lang LaSalle. Leidos and Boeing were among the companies with the most openings for AI positions.
New roles are emerging, like “prompt engineers” who develop and refine prompts or queries for AI tools to get the most valuable and appropriate responses. At the end of September, OpenAI rival Anthropic was seeking a “prompt engineer and librarian” hybrid position in San Francisco with a salary range of $250,000 to $375,000.
“The people who study the future of work, they say that certain jobs will go away,” Ramakrishnan says, “… but then there will probably be new jobs created that we don’t know yet.”