
Generative AI: Shaping a New Future for Fraud Prevention, by Neha Narkhede at QCon San Francisco

At the recent QCon San Francisco conference, Neha Narkhede gave a keynote on how generative AI can improve the state of the art in fraud prevention. She reviewed the current methods to detect and prevent fraud and where they currently fall short, and described a so-called "knowledge fabric" that can capture all information and knowledge about current fraud methods. She also introduced six foundational pillars of AI Risk Decisioning: a 360-degree knowledge fabric, a natural language interface, auto recommendations, human-understandable reasoning, augmenting risk experts, and risk automation.

Evolution of Fraud Detection

Narkhede started by explaining the historical technical approach to fraud detection. She divided it into three generations that characterize the common approaches used, with each generation learning from the pros and cons of its predecessor.

The first generation was rule-based, following the "if-this-then-that" principle. Despite being easy to manage, this model quickly reaches its limits in complex situations. The second generation couples such rule-based systems with traditional machine learning, enabling them to deal with high-dimensional data, but it is data- and time-intensive. The third generation, which she introduced during the talk, uses generative AI in synergy with traditional machine learning. This combination can enable superior fraud detection by recognizing complex and evolving fraudulent patterns and significantly reducing false positives.
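To make the contrast between the first two generations concrete, the following Python sketch pairs a simple "if-this-then-that" rule with a traditional machine-learning score. The thresholds, feature names, and model are illustrative assumptions, not part of Narkhede's talk.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Generation 1: a simple "if-this-then-that" rule.
def rule_based_flag(txn: dict) -> bool:
    return txn["amount"] > 10_000 or txn["country"] != txn["card_country"]

# Generation 2: couple the rules with a traditional ML score
# (a logistic regression fitted here on tiny synthetic data for illustration).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))                        # e.g. amount, velocity, device risk
y_train = (X_train[:, 0] + X_train[:, 1] > 1).astype(int)  # synthetic fraud labels
model = LogisticRegression().fit(X_train, y_train)

def hybrid_flag(txn: dict, features: np.ndarray, threshold: float = 0.8) -> bool:
    # Flag if either the hand-written rule fires or the ML fraud score is high.
    ml_score = model.predict_proba(features.reshape(1, -1))[0, 1]
    return rule_based_flag(txn) or ml_score > threshold

txn = {"amount": 2_500, "country": "US", "card_country": "US"}
print(hybrid_flag(txn, np.array([0.4, 1.2, -0.3])))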

It should also be noted that technological advances are increasing the amount of fraud seen in the industry. Fraudsters improve their methods thanks both to automation on their side and to companies having to balance customer friction against fraud losses. The fastest-growing fraud trend today is synthetic identity fraud.

Existing Methods

Previous-generation detectors have inherent shortcomings. Data imbalance is a major issue for machine learning methods, and both rule-based and machine learning-based methods can lack context, hampering fraud detection. Substantial human intervention is often necessary, and continuous rule adjustments are needed to deal with the adaptive nature of fraud. Although models can be retrained quickly, the absence of continuous learning makes it hard to keep models aware of new trends in fraud.

All this leads to the biggest problem: the limited scalability of these models. As transactions become increasingly complex, the systems struggle to scale with them. Narkhede mentioned that, in the field's current state, it sometimes takes weeks to adapt to a newly observed pattern. This is largely due to manual feature engineering, which may not capture all the information relevant for accurate fraud detection. Human oversight is often needed for model tuning, model updates, and verification of flagged transactions.

Generative AI for Fraud Detection

Narkhede proposed generative AI as a leap forward in the fraud detection domain. GenAI can foster adaptive learning and data augmentation, handle diverse data sets, and incorporate real-world fraud-related knowledge in real time. Furthermore, generative AI reduces false positives and advances precision through sophisticated algorithms. She proposed six pillars where GenAI could help, leading to four improvements.

The first improvement is adaptive learning. Methods can continuously learn from the latest transactions, which allows them to adapt to new patterns. By incorporating human feedback, engineers can enhance model accuracy over time, but even without human oversight these methods already provide benefits while remaining privacy-compliant. Their precision would allow a large reduction in false positives.
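A minimal sketch of that adaptive-learning loop, assuming an incrementally trainable classifier that is updated whenever an analyst confirms or rejects a flagged transaction; the model choice and feedback flow are this article's illustration, not the system described in the keynote.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss")

# Initial training on historical, labeled transactions (synthetic here).
X_hist = rng.normal(size=(500, 4))
y_hist = (X_hist[:, 0] > 0.5).astype(int)
model.partial_fit(X_hist, y_hist, classes=[0, 1])

def incorporate_feedback(features: np.ndarray, analyst_label: int) -> None:
    # Fold a single reviewed case back into the model: the human feedback loop.
    model.partial_fit(features.reshape(1, -1), [analyst_label])

# An analyst confirms a flagged transaction as fraud (label 1); the model adapts immediately.
incorporate_feedback(rng.normal(size=4), analyst_label=1)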

Part of the presentation was a demo in which Narkhede showed what an interaction with a generative AI agent could look like. By simply chatting with the agent, an engineer could create and edit a risk flow. She also proposed that the agent could highlight similar cases when a case is discussed, so that larger amounts of fraud could be blocked at the same time.

Conclusion: AI Risk Decisioning

Narkhede summarized the keynote by stating that a co-pilot could help humans make fraud detection decisions. A traditional approach would require manually reviewing transaction data, comparing it with known fraud patterns, and investigating associated entities, a time-consuming and error-prone process. With a co-pilot to handle this analysis, the model could explain why certain transactions are fraud-prone and learn from minimal examples to identify emerging trends.
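As a rough illustration of how such a co-pilot could learn from minimal examples, the sketch below assembles a few-shot prompt from a handful of labeled fraud cases; the cases and wording are invented for illustration, and the resulting prompt would be sent to whatever LLM backend the system uses.

# Invented examples of previously confirmed fraud, used as few-shot context.
KNOWN_FRAUD_EXAMPLES = [
    "Five accounts opened from one device within ten minutes -> synthetic identity fraud",
    "Shipping and billing addresses differ, rush delivery, new account -> card-not-present fraud",
]

def build_copilot_prompt(txn_summary: str) -> str:
    # Assemble a few-shot prompt asking the model to explain whether a transaction looks fraud-prone.
    return (
        "You are a fraud-risk co-pilot. Based on the confirmed cases below, "
        "explain whether the new transaction looks fraud-prone and why.\n\n"
        "Confirmed cases:\n- " + "\n- ".join(KNOWN_FRAUD_EXAMPLES) +
        f"\n\nNew transaction: {txn_summary}\nExplanation:"
    )

print(build_copilot_prompt("Three high-value gift-card purchases in one hour from a newly created account"))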

In conclusion, a large impact can be made by improving the efficiency and accuracy of risk decisions, dramatically reducing human effort and establishing scalable fraud and risk programs that speed up risk operations from weeks to hours.


 
