Navigating when and how to use AI in the work world depends on where you work. For many organizations, it is a love-hate relationship.
A woman struggles to figure out how to use AI in her job. (Illustration by News Decoder based on photo by charliepix.)
This article is the second part of a two-part series, by News Decoder founder Nelson Graves, that explores how AI is being used in schools and professional organizations. The first part, “Can you use AI intelligently?”, was published 12 December. Both articles were written exclusively for News Decoder’s global news service. It is through articles like this that News Decoder strives to provide context to complex global events and issues and teach global awareness through the lens of journalism. Learn how you can incorporate our resources and services into your classroom or educational program.
Can you use artificial intelligence in your career without worrying that a robot will replace you? Could you lose your job if you rely on AI too much? That’s what people all over the world are wondering as some employers embrace AI apps and tools while others prohibit them.
I asked four News Decoder alumni to tell us how they navigate the tricky world of AI in jobs. I also posed the questions to Jane Barrett, the head of AI Strategy at the Reuters international news agency, where I spent much of my journalistic career. Their answers might prove helpful to young people who recently joined the workforce or are about to do so.
Alexandra Gray-Harkins, a senior marketing professional, said her company strongly encourages her to use an internal AI tool to write emails, outline content and marketing campaigns, and support administrative tasks like writing performance reviews and job descriptions. “As a marketer, I use many AI tools in my day-to-day work and I work on larger strategic AI initiatives,” she said. “I also attend regular AI training.”
But St. John’s University, where she is also a doctoral candidate in Multi-Sector Communication, discourages her from using AI because her dissertation must be an original research paper.
Giuliana Nicolucci-Altman, who coordinates climate research and innovation at the International Rescue Committee, said that every workplace she has been in since ChatGPT was released has allowed the use of generative AI tools.
“I think there is still a general sense of trust that if these tools are used, they’re being used ethically and closely monitored for accuracy,” she said. “I’ve even been encouraged to use the tool to improve efficiency in a sector that’s facing increasing demand and a diminishing workforce.”
AI policies put in place
At Reuters, Barrett said, all journalists are encouraged to use AI, and she expects everyone to be using it daily by the end of the year.
To get there, the company is investing heavily in training and building its own tools so that when reporters make queries, the data won’t leak outside the organization.
Moreover, journalists can build their own AI tools and even basic apps. “Some of our most enterprising journalists have come up with some amazing tools to improve and speed up their reporting,” Barrett said.
“If there is a tool that could be of wider use, the central newsroom AI team tests it before making it available more broadly.”
Other AI tools Reuters is developing will do such things as extract facts from press releases and put them in the right format for fact-checking and distribution; draft short stories based on press releases or archive material; transcribe audio and video; translate material; package content into digital formats; write headlines and summaries; and flag issues in copy for an editor to check.
Some guidelines for using AI
News Decoder has developed Ethical Guidelines for Using AI that focus on transparency, accuracy, verification, fairness and avoiding bias. All authors who create and publish content with News Decoder — both student journalists and professional correspondents — are expected to follow these guidelines:
– Take responsibility for your work.
– Disclose AI use.
– Illustrate, don’t imitate.
– Add information about AI-generated media.
– Verify AI-generated content.
– Watch out for AI bias.
– Be inclusive.
– Only tell real stories.
For more comprehensive information, download our Guide for Students and Guide for Teachers.
But even where people are encouraged to use AI, that use comes with restrictions, and those restrictions differ from workplace to workplace.
Rules for use
At Reuters, Barrett said, there is a set of AI principles that all journalists must follow, as well as a corporate policy that governs the use of AI across all data and tools throughout the organization.
“We have a rule that no visuals may be created or edited using generative AI as news photos must show reality as it happened in front of the camera,” she said. “All the tools we are creating and approving for wider use are based on taking source material, creating content or analysis from that and, crucially, checking the veracity before publishing. Everything must keep to our tone and standards.”
At Reuters, all reporters and photojournalists are accountable for everything they publish, Barrett said. “If we find that there has been irresponsible use of AI, there is a chain of custody through our editing systems which means we can track back to where the AI was used badly,” she said.
Reuters is trying to stay ahead of the game in a world that is rapidly incorporating AI into just about everything. But not all organizations have the resources to keep up.
For many of the people Savannah Jenkins works with, AI is viewed as a direct threat to their business. Jenkins is a communications manager at Onja, a social enterprise in Madagascar that trains underprivileged youth to become software developers. “It’s one of the world’s poorest nations and the jobs these students land after the program allow them to support their families and extricate themselves from poverty,” Jenkins said. “AI is a direct threat to entry-level coders and the enterprise is having to adapt to this threat.”
Still, she acknowledged that it is generally accepted that AI is here to stay and that it can benefit even small organizations. “As a comms professional working in the nonprofit space, there are a lot of tools that can help small, under-resourced teams do more, especially around content development,” she said. “For example, the AI-powered tools in Canva allow smaller outfits to deliver high quality graphics.”
An AI future in flux
The bottom line is that we are in an experimental period: a very new technology is still being developed and tried out in ways that remain largely untested.
This creates all kinds of worries for people like Barrett.
“I worry that somebody will steal a lead on us,” she said. “Another publisher, a competitor and, most likely, one of the AI companies coming up with a whizz-bang AI-driven news service or product that damages our business, our industry and democracy of well-informed people.”
She also worries that someone will use a tool that has not been properly tested and inadvertently divulge Reuters information that shouldn’t be made public.
Her worries aren’t confined to internal use at Reuters. “I also worry about people getting into arguments or obsessive conversations with AI tools,” she said. “There is increasing proof that the sycophancy and attempts to keep users engaged with the chatbots can be very bad for you.”
Questions to consider:
1. Why is the use of AI in the work world so inconsistent?
2. Why is it important for corporations and non-profits to have policies in place on the use of AI?
3. Do you feel prepared to use AI in any job you might get?
Nelson Graves is the founder of News Decoder. A dual American-French citizen, he has worked as a foreign correspondent and educator on three continents. Recently he published a memoir entitled “Willful Wanderer”. He lives in France.
