Tom Hardin spent the early part of his career as a hedge fund analyst in the United States. In 2008, he entered into a cooperation agreement with the Department of Justice and played a key role in helping the U.S. government uncover how insider trading operated within the financial services sector. Under the codename "Tipper X," Tom became one of the most active informants in securities fraud history, contributing to more than 20 of the 80+ convictions in "Operation Perfect Hedge," a sweeping crackdown on Wall Street that became the largest insider trading investigation of its time. After resolving his case in 2015, Tom was invited by the FBI's New York City office to speak to its rookie agent class. Today, he is a sought-after global speaker and corporate trainer, known for offering a unique frontline perspective on conduct risk, behavioral ethics, and compliance. His memoir, "Wired on Wall Street," will be published by Wiley in 2026. He is a member of Interfor Academy.
If you missed Part 1 of the interview with Tom, click here.
Recently, you've been speaking at conferences about AI and compliance. What are the biggest takeaways from these conversations?
The big one is this: AI won't commit fraud. People using AI might. The behavioral risks are still the same. People can still rationalize: "The system recommended it." They can isolate: "No human touched this decision." And AI can even amplify peer influence by normalizing risky behavior through automation. What's changed is the scale and speed. A bad call used to affect one client. Now, it can ripple across an entire organization in milliseconds. And it might be invisible, embedded in code, tucked into algorithms no one reviews. So, my call to compliance teams is simple:
- Stay human. You can't outsource judgment to a model.
- Stay vigilant. Monitor not just what AI does, but what it enables.
- Stay ahead. Ethics should be part of the design phase, not an afterthought.
It's not about fearing AI; it's about making sure the human element stays in the loop.
You talked a bit about the risks of AI in compliance. What are some ways AI might be used positively in this area?
It's easy to focus on the risks of AI, and they're real, but I'm actually really optimistic about how AI can strengthen compliance and ethics if it's used the right way.
One of the biggest challenges in compliance is scale. You've got global teams, with thousands of emails, chats, and transactions happening daily, and humans just can't monitor all of it in real time. That's where AI can step in. It can spot patterns, flag anomalies, and detect behaviors that might suggest misconduct, well before a human would ever notice. Take something like natural language processing. You can use it to monitor email and chat traffic. Not to spy, but to flag potential issues: aggressive sales tactics, veiled pressure, even signs of rationalization language like "just this once" or "off the record." Those are early warning signals, and AI can surface them without waiting for a crisis.
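As an illustration, a first-pass version of that phrase-flagging idea can be sketched in a few lines of Python. Everything here is invented for the example: the watch list, the message format, and the function name. A production surveillance system would use a tuned lexicon or a trained language model, not a hard-coded list of substrings.

```python
# Toy watch list of rationalization phrases; purely illustrative.
WATCH_PHRASES = [
    "just this once",
    "off the record",
    "keep this between us",
]

def flag_messages(messages):
    """Return (message_id, matched_phrase) pairs for messages that
    contain any watch phrase (case-insensitive substring match)."""
    hits = []
    for msg_id, text in messages:
        lowered = text.lower()
        for phrase in WATCH_PHRASES:
            if phrase in lowered:
                hits.append((msg_id, phrase))
    return hits

# Hypothetical message traffic for the example.
inbox = [
    ("m1", "Book the trade now, just this once, before the review."),
    ("m2", "Quarterly numbers attached for the client call."),
    ("m3", "Keep this between us until the announcement."),
]

print(flag_messages(inbox))  # [('m1', 'just this once'), ('m3', 'keep this between us')]
```

Even something this crude surfaces candidates for human review; the point is triage at scale, not automated judgment.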
Another area is decision support. AI can help reduce ambiguity by offering ethical prompts or reminders at key moments, almost like a digital conscience. Imagine a system that, before approving a high-risk transaction, asks, "Have you reviewed the compliance risks?" or reminds you of your fiduciary duty. That nudge can make a big difference.
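A nudge like that is simple to wire into an approval flow. This sketch is hypothetical: the threshold, function name, and prompt text are made up for illustration. The design point is only that the prompts are generated before approval, not logged after the fact.

```python
HIGH_RISK_THRESHOLD = 250_000  # illustrative dollar threshold, not a real policy value

def pre_approval_prompts(amount_usd, compliance_review_done):
    """Return the ethical-nudge prompts to show before a transaction is approved."""
    prompts = []
    if amount_usd >= HIGH_RISK_THRESHOLD and not compliance_review_done:
        prompts.append("Have you reviewed the compliance risks for this transaction?")
    prompts.append("Reminder: your fiduciary duty to the client applies to this decision.")
    return prompts

print(pre_approval_prompts(500_000, compliance_review_done=False))
```

A low-value, already-reviewed transaction gets only the standing reminder; the high-risk path adds the compliance question.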
But AI is just a tool. It reflects the values of the people who build and deploy it. So, the companies that are really getting this right are the ones designing AI systems with ethical foresight from the beginning. They're asking not just, "Can this be done?" but "Should this be done?" AI can absolutely help compliance and ethics. But only if we stay human in how we use it. The goal isn't to replace judgment; it's to enhance it.
You've commented on incentives in education driving cheating. How does that apply to business?
It's the same pattern, different domain. In education, if all we care about is the GPA, kids will cheat to get it. In business, if all we care about is hitting quota, people will game the system, or worse. The trick is to build incentives that reward ethical decision-making, not just performance. That means recognizing the person who says "no" to a bad deal. It means measuring how results are achieved, not just the results themselves. One thing I challenge leaders with is this: What do you actually celebrate? If you only promote the rainmakers, regardless of how they get there, that's the message your culture hears. Incentives are loud. If they don't match your values, the values lose, every time.
Any final thoughts on where security, compliance, and culture intersect when it comes to a company's risk profile?
In security, you assess risks before they hit. Culture and conduct risk should be treated no differently. The challenge is that cultural risks are often invisible until they manifest as ethical breaches, misconduct, or reputational damage. But just like in security, there are telltale warning signs if you know where to look:
- Listening at the edges: anonymous employee surveys are just the start. Monitoring feedback from exit interviews, ethics hotline data, Glassdoor reviews, or informal "culture walkabouts" can reveal disconnects between stated values and real behavior.
- Mapping pressure points: teams under constant performance pressure, unclear incentives, or leadership churn are more prone to rationalizing unethical decisions.
- Analyzing decision-making patterns: are employees escalating gray-area issues, or is silence the norm? Tracking how decisions are made and communicated tells you whether people feel safe speaking up.
- Looking at "micro-behaviors": things like meeting dynamics, email tone, or how people respond to mistakes can be leading indicators of whether trust and transparency actually exist.
Culture isn't soft; it's measurable. But it requires a mindset shift: don't just wait for a scandal. Assess culture like you assess cyber risk: continuously, systematically, and with input from every layer of the organization.
Interested in having Tom speak at an event? Reach out to Interfor Academy at Academy@interfor.international
To find out more, please reach out to info@interforinternational.com