ClearTech Loop: In the Know, On the Move

The CISO’s Job in AI is Not to Stop the Wave But to Shape It

January 27, 2026


A Conversation with Travis Farral, CISO at Archaea Energy 

AI did not arrive in enterprises through a single decision. It crept in. 

It showed up inside productivity tools. It appeared in cloud platforms. It was quietly added to security products, analytics engines, and SaaS applications that teams were already using. 

Most organizations did not decide to “adopt AI.” They woke up one day and realized it was already there. That is the environment CISOs are operating in right now.  

In this episode of ClearTech Loop, Jo Peterson sits down with Travis Farral, Vice President and Chief Information Security Officer at Archaea Energy, to talk about what that reality means for security leaders who are being asked to govern technologies that are still evolving in real time.  

“This is not something that we’re going to be able to stop,” Travis said. “Even if we wanted to. It’s like standing in front of a tidal wave.”

🎧 Listen to the full episode in the player above

📬 Stay in the Loop: subscribe for new episodes
https://www.linkedin.com/newsletters/7346174860760416256/ 

This conversation focuses on three leadership realities CISOs are running into right now. 

First, AI is not one thing. The label hides what is actually being deployed, which makes governance difficult.  

Second, the AI threat model is shifting. Risks now show up in training data, model behavior, and prompt-driven interfaces, not just in traditional exploits and controls.  

Third, frameworks exist. The gap is fluency. CISOs need practical understanding of the guidance already available from organizations like NIST, OWASP, and MITRE so they can translate it into operating guardrails.  

 

Key Perspectives

AI is not one thing and that is the first risk 

Travis makes a clear point that “AI” has become a catch-all term that hides what organizations are actually deploying. 

“It does mean different things to different people. It could be something driven by a large language model. It could be natural language processing. Or it could be machine learning that’s been around for a lot longer, but companies will lump those things together.”  

For CISOs, that ambiguity is dangerous. You cannot govern what you cannot define, especially as more platforms quietly turn on AI features.  

The threat model is shifting 

AI does not just introduce new tools. It introduces new places where things can go wrong. 

Some risks live in the training data. “You can poison that and produce problems later on,” Travis warned. Others show up in the prompts themselves, the interfaces that tell models what to do.  

While some of this looks familiar, like privilege escalation or data theft, the mechanics are different. 

“The ways and the interfaces and mechanisms with which these things are done are new,” he said. “They’re novel to us.”  

Security is no longer just about protecting systems. It is about protecting how models reason, how data is combined, and how decisions are produced.  

Frameworks exist. Fluency is the gap

What stood out in this conversation is how pragmatic Travis is about where CISOs should start. 

“There are a lot of great partners… OWASP and MITRE and NIST,” he said. “They have frameworks that spell out a lot of the issues and the risks that are inherent in these technologies.”  

The problem is not that guidance is missing. The problem is that most teams have not yet built the fluency to apply it effectively in an AI-driven environment.  

Why the CISO cannot be the Department of No 

AI adoption is not something organizations can slow down. AI capabilities are being added into platforms continuously, sometimes without a purchase decision or a formal rollout. 

“This is not something that we’re going to be able to stop,” Travis said. “Even if we wanted to. It’s like standing in front of a tidal wave.”  

The job of the CISO is not to block it. It is to define what is acceptable and what is not, then put guardrails around those decisions. 

“We need to become very familiar with the rules of the road. What things are we going to be okay with. What things are we not going to be okay with. And how can we put some guardrails around that.”  

Policies alone will not be enough. Organizations will need DLP, CASB, and AI-specific security controls to put protective barriers in place around data and models.  
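To make that concrete, here is a minimal sketch of the kind of AI-specific control Travis describes: a DLP-style guardrail that scans an outbound prompt for sensitive data before it is forwarded to an external model. The pattern names and functions are illustrative assumptions, not part of any product Travis mentioned; a real deployment would rely on vendor-managed detectors inside a DLP or CASB platform rather than a hand-rolled regex list.

```python
import re

# Illustrative patterns only; a production DLP/CASB control would use
# maintained detector libraries, not a short hand-rolled list.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def guard(prompt: str) -> str:
    """Block prompts that would leak sensitive data to an external model."""
    hits = check_prompt(prompt)
    if hits:
        raise ValueError(f"Prompt blocked by guardrail: {hits}")
    return prompt  # safe to forward to the AI service
```

The design choice here mirrors the "rules of the road" framing: the guardrail does not try to stop AI use, it defines what is acceptable (benign prompts pass through) and what is not (prompts carrying regulated data are blocked with an explanation).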

My Take: The AI Tidal Wave Is Still Coming  

What I appreciated in this conversation is how familiar the leadership challenge actually is. 

Security has always had to adapt to technology that moves faster than governance. AI compresses the timeline and raises the stakes. The wave is already here. The organizations that will do well are the ones whose CISOs understand how to shape it, not pretend it can be held back.  

About the Guest | Travis Farral 

Travis Farral is Vice President and Chief Information Security Officer at Archaea Energy, where he leads cybersecurity and risk across a rapidly evolving energy environment. 

He has held senior security leadership roles at XTO Energy, ExxonMobil, Critical Start, and LEO Cyber Security, with experience spanning cloud, threat detection, incident response, and operational technology.  

Additional Resources

  • MITRE ATLAS: MITRE’s Adversarial Threat Landscape for AI is one of the frameworks Travis referenced when he talked about how attacks against models are different from traditional exploits. https://atlas.mitre.org/ 
  • ClearTech Loop with Dutch Schwartz: Travis’s comments about guardrails, controls, and not being the Department of No connect directly to Dutch’s episode on pragmatic AI safety. https://cleartechresearch.com/bumpers-not-brakes/