Suncoast Searchlight guidance and policies on using AI in our work.

Last updated: 12/23/2025

Generative artificial intelligence is the use of large language models to create something new, such as text, images, graphics and interactive media. These terms will be referenced throughout this policy:

  • Generative AI — A type of artificial intelligence that creates new content, such as text, images, or media, by interpreting and generating based on input data.
  • Large language models (LLMs) — AI systems trained on vast datasets of text to understand and generate human-like language; they are the information backbone that powers generative AI.
  • AI prompt — A specific input or instruction provided to an AI tool to generate a desired output.
  • Hallucination — The phenomenon where AI generates information or responses that are fabricated, inaccurate, or not grounded in fact.
  • Training data — The dataset — articles, research papers or social media posts — used to teach an AI model patterns, relationships and knowledge for making predictions or generating content.

Although generative AI has the potential to improve news gathering, it can also harm journalists’ credibility and our unique relationship with our audience.

Core principles of our policy

As we proceed, the following five core principles will guide our work. These principles apply to the newsroom and to other non-news departments, including advertising, events, marketing and development.

Transparency

When we use generative AI in a significant way in our journalism, we will document the tools and describe them to our audience with specificity, in a way that both discloses and educates. This may be a short tagline, a caption or credit, or, for something more substantial, an editor’s note or sidebar. When appropriate, we will include the prompts fed into the model to generate the material.

Accuracy and human oversight

All information generated by AI requires human verification. Everything we publish will live up to our longstanding standards of verification. For example, an editor will review prompts and any other inputs used to generate substantial content, including data analysis, in addition to the editing process already in place for all of our content.

We are aware of the risk of biases in large language models, and we will actively monitor AI-generated content, ensuring fairness and equity in our journalism to the best of our ability. Our AI steering committee, made up of journalists at Suncoast Searchlight, will regularly evaluate and update our standards to ensure uses and tools are equitable and minimize bias.

Privacy and security 

Our relationship with our audience is rooted in trust and respect. We will never enter sensitive or identifying information about our audience members, sources or our own staff into any generative AI tools.

As technology advances and opportunities to customize content for our audience arise, we will be explicit about how your data is collected — in accordance with our organization’s privacy policy — and how it was used to personalize your experience.

We will disclose any editorial content that has been created and distributed based on that personalization.

Accountability 

We take responsibility for all content generated or informed by AI tools. Any errors or inaccuracies resulting from the use of these tools will be transparently addressed and corrected. We will consider audience feedback in policy updates. And violations of this policy will result in retraining and possible disciplinary action.

Exploration

With the previous principles as our foundation, we will embrace exploration and experimentation. We will strive to invest in newsroom training so every staff member is knowledgeable about the responsible and ethical use of generative AI tools.

Logistics

The point person on generative AI in our newsroom is Josh Salman, who is supported by the following AI committee members: Kara Newhouse and Alice Herman. 

We coordinate all use of AI with this committee. This team is also the source of frequent interim guidance distributed throughout our organization. The team seeks input from a variety of roles, particularly those who are directly reporting the news.

Staff should expect to hear regular (at least quarterly) communication from this team with updates on what it is doing and guidance on what activities are generally approved.

In addition, members of this team will:

  • Monitor all software we use — including content management systems, photo editing software and business software — for updates that may include AI tools. Because software changes quickly and AI is being added to nearly every technology product, we will delegate appropriate team members to stay knowledgeable of updates.
  • Write clear guidance about how we will or will not use AI in content generation.
  • Edit and update our AI policy to ensure that it is both internally available and, where appropriate, publicly available (with our other standards and ethics guidelines).
  • Seek input from our audience, through surveys, focus groups and other feedback mechanisms as needed.
  • Highlight needed disclosures about partnerships, grant funding or licensing from AI companies.
  • Understand our privacy policies and explain how they apply to AI and other product development. This includes regularly consulting with editors, lawyers or other privacy experts that influence newsroom policies.
  • Innovate ways to communicate with the audience to both educate them and gather data about their needs and concerns.
  • Outline a clear process for how the policy will be updated, down to specifics such as how often the committee meets and who serves on it.

All uses of AI should start with journalism-centered intentions and be cleared by the appointed AI committee. Human verification and supervision are essential. The form we use to collect this information will include the following questions:

  • How do you want to use AI?
  • What is the journalistic purpose of this work?
  • How can you gather knowledge on audience needs and attitudes about your intended use?
  • How should the audience’s needs and attitudes inform your AI use?
  • How will you fact-check the results?
  • Will any material be published?
  • Which journalists will be responsible for overseeing this work and reporting out the results?
  • Which editors or managers will oversee the work?
  • What are the risks — the bad things that might happen (e.g., hallucinations, copyright or legal issues, data privacy violations)? What safety nets can you devise to intervene before negative outcomes harm our newsroom’s reputation?
  • What are the privacy implications of this use, and how will we protect user data?
  • Have we run this by the experts who create and maintain our privacy policy?  

Editorial use

Approved generative AI tools

Here is a list of tools that are currently approved for use at Suncoast Searchlight. Staff will contact AI committee chair Josh Salman with requests for new tools, and we will update the list pending an audit:

  • ChatGPT — Acceptable to use for initial research, finding articles and sources, code assistance, copy editing and summarizing documents. No Suncoast Searchlight content should be entered in any way that can be used to train the model.
  • Gemini — Acceptable to use for initial research, finding articles and sources, code assistance, copy editing and summarizing documents. No Suncoast Searchlight content should be entered in any way that can be used to train the model.
  • Claude — Acceptable to use for initial research, finding articles and sources, code assistance, copy editing and summarizing documents. No Suncoast Searchlight content should be entered in any way that can be used to train the model.
  • Notebook LM — Acceptable to use for initial research, finding articles and sources, code assistance, copy editing and summarizing documents. No Suncoast Searchlight content should be entered in any way that can be used to train the model.
  • Otter — We have a subscription to Otter for transcribing recorded interviews and meetings. We will not use Otter to transcribe a recorded interview with a confidential source.
  • Adobe Firefly — Acceptable to use for searching and summarizing documents.

Any of the above tools can be used to create radio transcripts of our stories for use by our partners, with disclosure.

Entering our content

Suncoast Searchlight staff members can enter our content into the approved tools under the following conditions:

  • Content is entered only into a tool configured so that inputs cannot be used to train the model.
  • Use of AI to transform our copy will respect the following values: 
    • Preserve our editorial voice: We will be cautious when using AI tools to edit content, ensuring that any changes maintain Suncoast Searchlight’s editorial voice and style guidelines.
    • Avoid full writes and rewrites: Generative AI tools will not be used for wholesale writing or rewriting of content. We will use them for specific edits rather than rewriting entire paragraphs or articles.
    • Proprietary content: We will not input any private or proprietary information, such as contracts, personnel information, email lists or sensitive correspondence into generative AI tools.
    • Verification: We will be mindful that generative AI tools may introduce errors, misinterpret context or suggest phrasing that unintentionally changes meaning, and staff journalists will review all AI suggestions critically to ensure accuracy.
    • Disclosure: In most cases, we will disclose the use of generative AI. Our goal is to be specific and highlight why we’re using the tool to better engage with readers.

Research

We may use generative AI to research a topic. This includes using chatbots to summarize academic papers and suggest related ones, or to surface historical information or data about the topic. Generative AI tools may be used to find checkable claims to pursue or to sift through social media posts for article topics. A reminder: These tools are prone to factual errors, so all outputs will be verified by staff.

Transcription

We may use generative AI to transcribe interviews and make our reporting more efficient. Our journalists will review transcriptions and cross-check with recordings for any material to be used in articles or other content.

Translation

We may use generative AI tools to translate material for article research. We may also use those tools to translate article content to reach new audiences; such translations will always be reviewed by an expert in the language and include a disclosure. This includes automated translation tools available to the public on our website.

Searching and assembling data

We may use AI to search for information, mine public databases or assemble and calculate statistics that would be useful to our reporting and in the service of our audience. Any data analysis and writing of code used on the website will be checked by a journalist with relevant data skills.

Headlines or search engine optimization

We may use generative AI tools to generate headlines or copy to help our content appear more prominently in search engines. We will include enough facts in the prompt that the headline is based on our journalism, not on other reporting.

Summary paragraphs and repackaged content

AI tools may be used to summarize or repackage articles in other formats, such as radio scripts, with full disclosure.

Similarly, these tools may be used to summarize or repackage articles or other content for new editorial or teaching uses, such as in newsletters, quizzes or courses (see social media content for additional guidance). For repackaged content, we will include a disclosure.

Copy editing

Generative AI may be used as a tool to assist with copyediting tasks, such as identifying grammar issues, suggesting style improvements or rephrasing sentences for clarity.

Social media content

Generative AI tools can be used to summarize articles to create social media posts. For infographics, there will be a disclosure. To avoid label fatigue, we will not require labels for social media posts that include AI-generated summaries, as long as a staff journalist reviews the content and we link to this policy in Linktree or other resource lists on social platforms.

Visuals

Suncoast Searchlight holds AI-generated visuals to the same rigorous ethical standards as all forms of journalism. Because images shape perception instantly and powerfully, our use of generative AI in visual storytelling is governed by principles of truth, transparency and audience trust.

These guidelines apply to all AI-generated or AI-assisted visual materials, including illustrations, composites, animations and enhanced photographs. Every visual must serve a clear editorial purpose and uphold our responsibility to inform, not mislead. We will not use AI to generate illustrations. We may, however, use AI to generate graphics and charts.

Humanity first

When a scene can be documented ethically and accurately by our journalists, human coverage is the preferred option.

AI-generated visuals may only be used when:

  • They are essential to the audience’s understanding.
  • The image is impossible or inappropriate to obtain through traditional means.

Accuracy over aesthetics

AI photo enhancement tools (e.g., sharpening, lighting correction, denoising) must reflect reality, not dramatize or distort it; see the AP’s guidelines on photos (page 11). Edits that exaggerate emotion, alter mood or misrepresent the scene violate visual ethics. For example, deepening shadows to heighten drama in disaster imagery is not permitted. All enhancements must be disclosed internally and reviewed against the original.

Review and verification

Given the rise of AI generation tools available to the public, editors and journalists must be vigilant about analyzing reader-submitted content. Media verification must rely on multiple methods — metadata checks, source verification, AI-assisted forensics — and never on one tool. Verification decisions must be documented internally for future review and accountability.

No manipulation of real people or events

We do not use AI to create or alter depictions of real people or places unless clearly disclosed and editorially justified. This includes recreating faces, changing expressions, or adding or removing individuals from scenes. We will not use AI to simulate likenesses of staff or sources in news reporting. 

Ongoing training

Regular training on AI tools and experiments will be available and at times even mandatory. This training will be delivered by a combination of members from the internal committee and outside experts.

Environmental impact

Suncoast Searchlight acknowledges the energy demands associated with training and deploying large-scale AI systems. As part of our commitment to sustainable journalism, we recognize that responsible AI use includes minimizing our environmental footprint.

We commit to prioritizing efficient tools and responsible usage. If possible, we will use local models rather than cloud-based services.