This is a summary of an episode of Pioneers, an educational podcast on AI led by our founder. Join 2,000+ business leaders and AI enthusiasts and be the first to know when new episodes go live. Subscribe to our newsletter here.
TLDR:
- Artificial intelligence tools can address workplace diversity issues, but they often embed human biases, so leaders need strategies to maximize the benefits while mitigating the risks.
- Effective AI depends on diverse teams and inclusive processes; leaders must ensure varied perspectives shape AI development to avoid replicating existing biases.
- Success with AI requires a culture prepared for safe innovation, characterized by transparency, collective review, and ethical collaboration with partners.
- Leaders should demand openness in AI development, scrutinizing training data and metrics to prevent reinforcing historical privileges and biases.
New AI tools seem full of promise for fixing workplace diversity problems. But they also bake in human biases that can make inclusion worse. So how do leaders tap the good while protecting against the bad?
Before we dive into all that Sharifah shared, be sure to check out the full episode here:
Meet Sharifah - AI and DEI Specialist
After 20 years of helping global banks, tech titans and startups do data right, IntentHQ’s Chief Product Officer, Sharifah Amirah, sat down with us in our latest episode of Pioneers to share her battle-tested tips. Her wisdom offers a simple leadership game plan for using AI systems to thoughtfully boost equity.
Sharifah has spent roughly 20 years in the tech sector, beginning with the launch of the first wireless broadband network.
Being a woman in the tech industry in the early 2000s was hard, and that experience inspired her to pursue a gender studies degree at the London School of Economics, after which she worked with the UN and a number of NGOs.
Shortly after finishing her PhD at Cambridge, she joined IntentHQ, where the team is doing great things with AI. Her degrees and tech background help her use technology to handle diversity and inclusion in artificial intelligence and the responsible use of AI tools.
People and Outlook Guide Purposeful AI
Artificial intelligence lives or dies based on the hands shaping it.
Who’s around the design table? What questions do they ask? Which ideas move ahead? Even proven apps link back to human choices made long before launch.
Leaders seeking AI that drives inclusion must purposefully build creative groups with different thinkers raising shared questions. Embracing unexpected voices surfaces hidden issues and sparks unforeseen solutions worth betting on.
Varied minds break groups out of comfy ruts where like-minded people pontificate without questioning assumptions. Diverse perspectives pave the path for AI to assist excluded groups.
AI's Foundations Reveal Its Failings and Future
Sharifah says that using artificial intelligence to promote diversity is possible, and that it's a key ingredient. However, equal inclusion in process and power remains critical for avoiding uneven effects.
Sharifah recalls how early recruiting algorithms at tech titans merely perfected the replication of a narrow employee prototype (i.e. privileged white men from elite colleges). This again exposes how much AI depends on its foundations.
Systems optimizing historical practices, however harmful, require intentional intervention to propel new paradigm progress. Leaders must pursue "next" practices shaped by wide participation, not just discourage detrimental behaviors within lingering inequitable systems.
Doing differently means moving from tolerance toward total belonging and jointly creating what's next.
Out-of-the-box AI systems boost those already winning. If thoughtfully constructed, AI can overhaul systems so more people can meaningfully compete and prosper.
“There’s a lot of potential and I think the challenge is realizing that potential and how you introduce the technology to address gaps and biases, which might have existed for a long time because AI is training on data.” — Ankur Patel
Ankur explains that there’s no reason to go into a police state where you’re observing every single conversation or communication within the company. However, when properly used, AI can help identify where bias occurs so you can intervene.
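As a hedged illustration of that idea (not a tool Ankur or IntentHQ describes in the episode), the sketch below looks only at aggregate, anonymized hiring-funnel data rather than individual conversations: it compares selection rates across groups and flags any group that falls below the four-fifths (80%) threshold commonly used in adverse-impact analysis. The group labels and numbers are entirely hypothetical.

```python
# Minimal sketch: spot where bias may be occurring from aggregate data,
# without monitoring anyone's individual communications.
# All data here is hypothetical and for illustration only.
import pandas as pd

# One row per applicant: a (self-reported, consented) group label and
# whether the applicant advanced past a given hiring stage.
funnel = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "advanced": [1,    1,   0,   1,   1,   0,   0,   1,   0,   0],
})

# Selection rate per group, compared against the best-performing group.
rates = funnel.groupby("group")["advanced"].mean()
best_rate = rates.max()

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    # Four-fifths rule: ratios below 0.8 are a common signal to investigate.
    flag = "investigate" if impact_ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rate:.0%}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")
```

The point is the shape of the check, aggregate rates against a clear threshold, so people know where to look before any intervention happens.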
Preparing Culture and Partners for Responsible Artificial Intelligence Diversity and Inclusion
Even well-intended AI flops without cultural readiness. Leaders must carefully nurture an appetite for safe innovation through habits and ecosystems before productively applying inclusive AI.
At data trailblazer IntentHQ, this means deep transparency and a regular collective review of existing dynamics. Self-reflection reveals blind spots while sparking ideas on possibilities. Such a receptive culture intentionally spreads further through partners collaborating on AI guardrails.
Jointly investigating values and behaviors provides direction for AI usage that stretches ethically across entire business ecosystems. Doing right expands through connections, not isolation.
Deconstructing AI's Black Box Blind Spots
With receptive cultures primed for responsible AI advancement, unraveling conventional code-driven systems represents the next frontier. Neglecting strict scrutiny lets discrimination expand unchecked in an accountability vacuum.
Sharifah says leaders must insist that technology creators open the black box and illuminate AI's secret sauce. Transparency and rigorous inquiry are the best tools for eliminating bias, such as gender bias.
Dig into what training data got used, what inaccuracies got dismissed, and what yardsticks determine success. Don't blindly accept metrics that perpetuate historical privileges and maintain an unfair status quo.
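To make that concrete, here is a minimal sketch of what refusing a single yardstick can look like in practice. It assumes you can get a model's predictions on a held-out set alongside true outcomes and a consented demographic attribute; the data and column names are hypothetical, and this illustrates disaggregated evaluation in general, not Sharifah's or IntentHQ's methodology.

```python
# Minimal sketch: break a model's headline metric down by subgroup so a
# single aggregate number can't hide uneven performance.
# All data here is hypothetical and for illustration only.
import pandas as pd

# Held-out evaluation set: true outcome, model prediction, subgroup label.
results = pd.DataFrame({
    "subgroup":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":     [1,    0,   1,   1,   1,   0,   1,   1],
    "predicted": [1,    0,   1,   1,   0,   0,   1,   0],
})

results["correct"] = (results["label"] == results["predicted"]).astype(int)

# Accuracy and selection (positive-prediction) rate per subgroup.
summary = results.groupby("subgroup").agg(
    accuracy=("correct", "mean"),
    selection_rate=("predicted", "mean"),
)

# Recall per subgroup: of the people who truly qualified, how many did
# the model actually pick up?
summary["recall"] = (
    results[results["label"] == 1].groupby("subgroup")["predicted"].mean()
)

print(summary)
```

If the subgroup rows tell a different story than the overall average, that's the conversation to have with the vendor or the build team.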
Stay unsatisfied with value-neutral talk or altruistic happy talk. Demand clarity on how AI gets built and judged as helpful versus harmful. Sharifah recommends asking yourself questions like:
- What worldview drove the framing of solutions judged appropriate?
- How did we fill gaps in limited, biased data plaguing systems?
- Did a diversity of stakeholders participate in shaping pilots or defining success?
Leaders will grow into stronger question-askers through exposure, further spurring inclusive evolution. Still, set clear transparency expectations up front rather than reacting to undesirable impacts that may be impossible to undo.
“We see some of the challenges with LLM today… If your output model looks biased and you’re part of the bias, you’re not really going to flag or correct that…” — Sharifah Amirah
Poisonous Past and Promising Pathways Forward
We’re still at sunrise for AI expanding opportunity and equity. Missteps litter the early days, but the consistent difference comes from diverse teams openly building applications that tackle society’s biggest quandaries.
The future remains undetermined. Leaders constructing creative groups to intently interrogate black box AI substantially shape what opportunities tomorrow holds for whom. We must all lean into that duty.
So bringing together mixed minds to deeply question how AI gets built and used represents the first phase in seizing technology’s promise while avoiding its pitfalls. That grounds lasting progress.
What questions on employing AI to tackle inclusion challenges currently captivate your curiosity?
Start small but solidly to position expanding AI as an accelerating tool for equity as capabilities advance. Match high tech with high-trust teams to unleash AI’s multiplier effects for shared returns long into the future.