Sunday, September 29, 2024

GenAI First Steps — Use Case Selection is Key. How to Pick a Winner.

GenAI has been on my mind lately, as it has for many data leaders. This piece struck a chord with me in how it considered the early steps toward GenAI adoption.

I particularly like the approach of describing those early moves as experimentation, and controlled experimentation at that. That’s a powerful way of putting it; it implicitly conveys the need to proceed with due caution to reap the benefits. It is indeed a balance between risk and reward, between caution and speed.

These early steps toward GenAI are a delicate balancing act—not just for organisations but also for the reputations of those who lead their data functions.


[Image generated with AI. September 19, 2024, 11:26 PM]


For me, a well-considered choice of use cases is critical. Pick the right few to start the journey:

  • Ones that don’t reach too far, and that won’t need a squadron of consultants to see the light of day
  • Ones that recognise and can take advantage of the (actual) state of readiness of your data estate (and associated tech)
  • Ones that don’t carry too much downside risk if things go wrong
  • But, equally, not ones whose benefits are so small (or so easily achieved by other means) that the outcome is seen as inconsequential

You need to signal you’re moving, both to keep key stakeholders happy and to stop a raft of parallel, fragmented activity. But more than that, there’s an excellent opportunity to act on a risk that is already manifesting across a much larger stakeholder cohort: staff rushing to embrace the productivity boost they see GenAI can give them, without thinking about, knowing about, or caring about the risks that may bring. Find an early use case that helps staff do the right thing, rather than leaving them feeling their only choice is between reaching outside the organisation or doing nothing.

Choosing the first GenAI use cases well can also help contain this risk before corporate (or customers’) data leaves organisational walls, or before a hallucination is acted upon unchecked.

And then there’s the question of AI governance.

- When should we introduce guardrails vs. guiding principles vs. hard-and-fast standards and robust, comprehensive frameworks?

- How much is too much? And how will we know when that point has been passed?

- Can this governance development work as a parallel activity rather than something that needs to be landed first?

Pragmatically, I see the need to implement “enough” governance while actively watching for unchecked behaviour or people seeming mired and unable to move. We must also be prepared to learn, iterate, and adjust quickly.

This brings us back to one more criterion for use case choice: getting those right (or at least right enough) is critical!