The increased use of AI has sparked debate over its potential benefits versus possible harms across all industries. Concerns encompass the risks of unchecked machine learning expansion, discrimination, misinformation, and broader societal harm. When it comes to AI's role in inclusion, there have been numerous instances of sexist robots and racist chatbots getting it terribly wrong.

Recent technological advancements have intensified AI's prominence. The global AI market, valued at USD 136.55 billion in 2022, is projected to grow at a CAGR of 37.3% from 2023 to 2030, underscoring its importance. Regulators are taking note.

A CNN Business report highlighted a special AI session during a Senate hearing at the US Capitol in September 2023, featuring civil society leaders, senators, and tech giants such as X, Meta, and IBM. The session gave them "a significant opportunity to influence how lawmakers design the rules that could govern AI," and they advocated for policy, regulation, and stricter oversight.

Fintech has long employed automation, machine learning, and AI. I interviewed fintech professionals to gauge their AI usage, interest levels, and sentiments toward the AI trend. During our discussions, I asked, “Can AI be trusted with inclusion?”

AI For Fintech Good

Initial discussions on AI's potential in financial services were positive, focusing on its substantial impact on fraud detection, revenue forecasting accuracy, informed lending decisions, and credit risk management. There was a palpable buzz about data quality, security, and compliance. Additionally, there was excitement over the estimate that chatbots alone could save banks a staggering USD 7 billion this year.

The benefits of AI extended to education access, poverty reduction, improved healthcare, space and deep-water exploration, and metallurgical excavations. Rohan Handa, an ex-founding member of Horizen Lab Ventures, a Web3 advisory firm, emphasized AI's potential to revolutionize data processing. He referred to it as the "next frontier in terms of what we can do with our time and knowledge."

Krishna Nadella, Head of Americas at Sigtech, a cloud-based quant technology company, expressed excitement about harnessing advanced analytics, predictive models, and data-driven insights. When addressing AI concerns, he likened its significance to a child who fears swimming but fails to realize they're already in the water.

Growing and Ongoing Concerns

Concerns about AI in finance are valid, as highlighted in Finance Magnates' article "The Role of AI in the Future of Fintech." The article emphasizes the impact of inadequate representation across society and the creation of biased algorithms due to data and diversity issues.

Yael Malek, Chief People Officer at digital banking fintech Bluevine, expressed apprehensions regarding ethics, privacy, potential job losses, and the societal impact of "AI challenged by the fundamental human need for connection and belonging." The headline of EY's July 2023 "CEO Outlook Pulse Survey" reported fears of "unknown consequences." The survey revealed that 65% of CEOs believe more effort is needed to address the social, ethical, and criminal risks of an AI-driven future, with a similar number feeling that insufficient action is being taken.

Unreliable Data Input Leads To Unreliable Output

When considering the actions that need to be taken, the conversation centered on collective responsibility for fair data and quality content. Shivina Kumar, Director of Brand and Communications at Compstak, noted that AI systems "inherit biases from input data," leading directly to biased output. She emphasized that blaming AI for surfacing negative content without considering the positive is shortsighted, as AI is an accumulation of human-generated content. People should therefore avoid discrimination in the content they create and share, promoting inclusivity over exclusivity. She concluded that technologists should pay increased attention to codifying inclusion and mitigating bias directly in AI models to counteract the human-level biases that exist today.

Billie Miric, Senior Director for Product and Revenue Strategy at Vertex Inc, a tax-focused fintech, questioned the diversity within the groups solving AI challenges. Yael echoed this sentiment, urging AI companies to prioritize diverse hiring and foster inclusion. She explained that diverse input data from a diverse workforce can lead to more inclusive data output. As Chief People Officer, she stressed the importance of equipping a diverse workforce with bias training, authentic actions, and cultural awareness to support non-judgmental data input.

Appropriate Governance

Media outlets are actively calling for proper governance, but questions persist: government, corporate, or both? Who bears responsibility for the future, and could AI spiral beyond control? The recent Senate AI session is just the first of nine steps toward addressing these concerns. Will the room's occupants hold all the answers? A.M. Bhatt, CEO of DAE, an educational nonprofit aiming for social and economic justice, emphasized that ethical application depends on society. He said that "in the hands of an ethical society, all tools can be ethically applied," while unethical hands will misuse them, regardless of constraints.

Yael pointed out that technology shaped by humans carries embedded biases. Shivina’s suggestions for fintechs include user testing, feedback loops, and multiple iterations to foster flexible growth. Achieving AI inclusion might require diverse perspectives, making a “single barometer” challenging.

The Power of Imagination

Rohan identifies the central challenge as the “black box” between data input and output, encompassing decision-making, options assessment, and gut instincts.

Yael also emphasizes that AI won't replace the empathy and critical thinking necessary for success. Krishna adds that AI can automate tasks and save time but lacks the power to create: "Where AI ends and the human condition begins in our ability to imagine."

Discrimination in AI arises from overlooked datasets and underappreciated perspectives, yet AI can promote inclusivity if directed toward that goal. Yael suggests using AI to detect biases, especially those concerning marginalized populations, "fostering ethical and responsible behavior." Shivina highlights the growing demand for AI transparency: from developers to marketers, people want to "look under the hood" and understand how things are built and how conclusions are reached. This should further the need for more inclusive practices.

Will AI Be Used For Inclusion?

Krishna advocates personal learning within AI, suggesting staying updated, engaging in professional networks, and evaluating your tech stack for continuous learning. He views AI as a "transformation journey" and advises considering what you want to achieve with the tool rather than "what the tool will do to you." The potential for amplifying human learning, growth, and efficiency across the fintech space is staggering.

Billie expresses excitement about diversity, equity, and inclusion. Instead of questioning whether AI can be trusted with inclusion, she says, we need to take responsibility for guiding and governing it in a way that aligns with our objectives. She urges us to think of AI as yet another tool that can help advance our goals.
