Building a Company Policy for Using Generative AI

Recently, I tuned into the TED podcast "WorkLife with Adam Grant". In the episode "The Real Reason You Procrastinate" (at timestamp 12:56), Grant delved into the psychological tug-of-war between our "want-self" and "should-self". The "want-self" encourages us to live in the moment and savor immediate pleasures, while the "should-self" propels us toward long-term goals. Striking a balance between the two is crucial for enjoying life's pleasures while still making progress on our objectives.

No recent technological advancement has thrown this delicate balance into sharper relief than the unveiling of OpenAI's ChatGPT and Google's Bard. These tools, with their ability to generate relevant content for a wide array of roles and professions, have, to put it mildly, caused quite a stir. Their sheer potential has made navigating the "want-self" and "should-self" more challenging than ever.

You need a generative AI policy. Now. It's all about balancing our "want-self" and "should-self" in the AI world. Sure, we're all excited to play with these new AI tools; they have enormous potential to boost creativity and productivity for every knowledge worker. But we can't ignore the need for clear guidelines. Banning them? That's not the answer. It only breeds mistrust and puts us on the back foot against our competitors. Instead, we need to embrace these tools responsibly. So a generative AI policy isn't just about playing it safe. It's about finding the sweet spot between our thirst for innovation and the need for responsible use.

Below is the set of steps that we at Ingage used to collaboratively build a generative AI policy that aims for that sweet spot.

  1. Form a team. This team should include representatives from a variety of departments. We pulled together a cross-functional task force drawing on expertise from across Ingage's business management and software development capabilities, chosen so that the team's differing viewpoints would naturally balance the discussion between benefits and potential risks. We also brought in expertise from the executive and human resources teams, who provided guidance on what should be treated as a statement of policy (what we can and cannot do as Ingage employees) and what constitutes a best practice (a living set of guidelines driven by professional experience and research).

  2. Do your research. Understand the different types of generative AI, the potential benefits and risks, and the ethical considerations involved in using these powerful tools. Each team member needs to come to the table with an analysis of how generative AI tools could impact their respective functions. Assess the areas where the organizational risks outweigh the potential benefits, and vice versa. This analysis becomes the basis for the guardrails discussed in the next step.

  3. Propose and debate policy elements. The Ingage task force debated each of the topics and circumstances derived from our collective research. Staying true to our company's core values, we considered every topic in conversations focused on that balance between "want-self" and "should-self". Each policy element needed to be clear, concise, and easy to understand, which is where guidance from HR was especially helpful. The end goal is a policy that's specific enough to guide employees in their use of generative AI, but flexible enough to allow for innovation.

  4. Communicate the policy. At Ingage, we introduced the policy in our weekly newsletter and incorporated the material into our online handbook. We'll also hold a training session at an upcoming company meeting that provides an open space for questions and feedback.

  5. Plan for a regular review. The generative AI space is changing faster than any of us can keep up with. Legal challenges are working their way through the courts, regulatory agencies are publishing positions and concerns, and lawmakers have barely scratched the surface of the federal and global implications of these powerful tools. At Ingage, we plan to revisit our policy and best practices every six months as a standard cadence.

At the end of our self-imposed two-week deadline, we arrived at the policy below: a clear outline of what is required or prohibited when using generative AI for internal and external client work. It balances the potential for innovation with the need for transparency, confidentiality, and integrity.

Generative AI Usage Policy

As a company that embraces innovation and cutting-edge technology, Ingage recognizes the potential of generative AI tools to enhance our work processes, improve efficiency, and deliver added value to our clients. This policy outlines guidelines for responsible and ethical use of generative AI while ensuring compliance with confidentiality, integrity, and legal requirements.

We're sharing our current policy to offer insight into what you might consider as you chart your own organization's journey with generative AI. That said, adopting this material without a healthy set of guided conversations will not produce the right guardrails for your organization. The field is also rapidly evolving, and we expect to review and update our policy regularly as we learn more, though we will not be updating this article. Our task force engaged in many valuable debates that helped us arrive at this simple-but-clear policy.

At Ingage, we are more than happy to share those experiences with you and help guide you through your journey to a clear generative AI usage policy. If you're interested in learning more about our latest policy and generative AI practices, reach out to us at interested@ingagepartners.com.