Making AI Tangible: Our AI Infrastructure Residency

Last year at Carnegie Mellon University Libraries, we launched an interesting role: Library AI Infrastructure Resident. Much of the conversation around libraries and AI tends to focus on literacy, openness, ethics, or critical reflection—all important dimensions. But I wanted to approach AI from a different vantage point.

How do we make AI pragmatic and tangible in the library context?

Put differently:

If we want to build, design, or host AI projects—what does that actually require?

  • Technically?

  • Socially?

  • Operationally?

The residency was created to help us explore those questions and see what was really needed.

Another technical shift?

We've seen this kind of shift before. The rise of the web and Web 2.0, the mobile transformation, the emergence of digital humanities, digital publishing, makerspaces, media studios, and extended reality (AR/VR)—each wave has required us to explore, experiment, adapt, and reimagine how we integrate emerging technology and methods into our infrastructure and service portfolios.

With AI, the same is true, except it's even more complex. We’re not just talking about adopting tools or using chat platforms. We’re talking about infrastructure:

  • What are the social and technical requirements for AI projects?

  • What new skillsets are needed across the organization?

  • What policies, support models, and budget practices must evolve?

This isn't just about “hosting content.” We already know how to host digital archives, images, audio, video, and interactive media. But hosting AI projects feels different. It requires planning for compute power, GPU access, APIs, subscription costs, data flow, and storage, all while navigating ambiguity around licensing, governance, and responsibility.

We also noticed varying levels of AI interest across the organization. Some colleagues, experienced coders already experimenting, were ready to dive in. Others were intrigued but unsure where to begin. Some had datasets or ideas but didn’t know how to turn them into prototypes or projects.

We realized we had more questions than answers:

  • What infrastructure is required?

  • What support models can scale?

  • How do we encourage experimentation while setting realistic expectations?

  • How do we manage costs when usage-based pricing (like compute time) is unpredictable?

That’s where the idea for a residency came in.

The Role: AI Infrastructure Resident

We created a project-based residency to help us explore these unknowns in real time over a two-year period. The position is structured to:

  • Work across departments to identify AI-related needs, gaps, and opportunities

  • Prototype AI services and tools, both front-end and back-end

  • Explore digital infrastructure questions, especially those that would influence long-term investment decisions

  • Engage library staff—from early adopters to the AI-curious—to understand their interests, datasets, workflows, and challenges

We placed the residency within our Library Project Management Office (PMO)—not in IT, not in Operations, and not embedded with library faculty. That was a deliberate choice.

We wanted to pair technical exploration with organizational scaffolding—the kind of structure, support, and follow-through that helps ideas move from curiosity to implementation.

Ken Rose, our Director of the PMO, ensured the residency benefited from clear scoping, project planning, documentation, and pacing. It allowed us to turn ideas into viable, supported projects, and fostered inclusivity by inviting participation from colleagues across the library, regardless of their technical background or where they sat on the org chart.

From Ken Rose: Project Management + AI: some lessons learned

Meet the Resident

Our AI Infrastructure Resident, Dom Jebbia, embraced the role with curiosity. He began by talking to as many people as possible—informally, conversationally—even if they didn’t have AI projects in mind. He used four key questions to start conversations. Below are those questions, along with a generalized summary of the responses he heard most often:

  1. What does the word "artificial intelligence" mean to you?
    Everything. Anything. Nothing.

  2. What about AI excites you?
    It will make my life easier. New tech. Feels like magic.

  3. What about AI scares you?
    It will take my job. Skynet/War Games. Deepfakes.

  4. What do you want AI to do for you?
    I want it to solve X problem.

That last one — solve for X — became a gateway. Often, the real opportunity wasn’t AI per se, but something adjacent: automating a workflow, visualizing data, creating new functionality, or improving decision-making. AI became a conversation starter, but the needs were broader. People were drawn to AI as a solution, but the underlying problem could often be addressed with simpler, existing technologies.

Even so, the residency helped us understand what pragmatic AI really means in a library setting. From questions about APIs and FERPA compliance to the logistics of local model hosting, GPU access, and hardware support, the conversations quickly grew more complex. Licensing emerged as an unexpected tangle—who pays, how is it requested, can you be reimbursed, and how do you budget for compute power when costs are usage-based and unpredictable? Many of these questions didn’t have clear answers yet, but the residency gave us space to surface them, explore them, and start building the pathways we’ll need.

Lessons Learned (So Far)

Over the past year, Dom prototyped, tested, and demoed projects, blending technical experimentation with human-centered design. Some projects launched. Others remain in development. But the residency created momentum. It gave us a safe-to-fail space to learn what we don’t yet know.

Here are a few of Dom’s key takeaways:

  • Data acquisition and pre-processing are 90% of the work.

  • People love their data—and are protective of it.

  • There’s excitement and fear in equal measure.

  • Most AI use cases people bring up can be solved with existing technologies and solid project management.

Why This Model Matters

The residency offered us:

  • An agile, time-bound role to explore problem spaces and identify achievable solutions

  • A way to work across teams and lower the barrier for AI exploration

  • A space to reexamine librarianship in the context of computational and sociotechnical change

  • A chance to support a transitioning workforce and prepare for an uncertain future

  • A foundation for developing evidence-based insights to inform budgeting, planning, and further investment in AI infrastructure and services

What’s Next?

The residency continues. Dom has several new projects in development, and I’ve encouraged him to share this work more broadly through demos, digital showcases, and conference presentations. Even at this early stage, one thing is clear: AI in libraries isn’t about chasing hype. It’s about building capacity. Not just technical capacity, but organizational readiness. Not just tools or advanced chat prompts, but a culture of experimentation.

Having a role like this—agile, exploratory, and deeply embedded—gave us a way to bridge strategy and operations, theory and practice, aspiration and implementation. It created space to ask different questions, test ideas in low-stakes ways, and build trust across departments. The residency offered psychological safety for staff to explore unfamiliar terrain, while also providing a technically fluent partner to help navigate complexity. In short, it helped us begin building not only the digital infrastructure for AI, but the social infrastructure too. That, to me, is the true value of the residency model: it doesn’t just spark innovation—it makes innovation tangible, inclusive, and real.

We shared some of these reflections today at the Generative AI in Libraries (GAIL) Virtual 2025 Conference. I’ll add a link to our talk and slides once they’re posted.
