How to Apply the Rule of 72 to AI Automation
Combine the Rule of 72 with AI automation to amplify the impact of your automations
Compound Leverage
Issue #007
Stoic Wisdom of the Day
“The impediment to action advances action. What stands in the way becomes the way.”
- Marcus Aurelius
With AI, the possibilities to achieve extraordinary gains can be overwhelming. Today, I want to share a tip on using AI for doubling results, guided by the timeless financial principle known as the Rule of 72.
What is the Rule of 72?
The Rule of 72 is a simple formula used in finance to estimate the years required to double an investment at a fixed annual rate of return. By dividing 72 by your annual growth rate, you get a rough estimate of how long it will take for your investment to grow twofold.
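The formula is simple enough to sketch in a few lines of Python. The 8% and 20% rates below are illustrative, not figures from the experiment:

```python
def doubling_time(growth_rate_percent: float) -> float:
    """Rule of 72: approximate number of periods needed to double
    at a fixed growth rate.

    The period matches whatever timescale the rate is quoted in:
    an annual rate gives years, a daily rate gives days.
    """
    if growth_rate_percent <= 0:
        raise ValueError("growth rate must be positive")
    return 72 / growth_rate_percent

print(doubling_time(8))   # an 8% annual return doubles in ~9 years
print(doubling_time(20))  # a 20% daily growth rate doubles in ~3.6 days
```

Note that the rate works on any timescale, which is what makes the rule usable for campaign metrics measured in days rather than investments measured in years.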
Now, let’s apply it in the real world. The experiment I created to test this measured marketing campaign performance to determine the rate of doubling over days instead of years.
Let’s walk through the process.
Identify a specific task
Before you can leverage AI to accelerate doubling, define a specific task with significant impact. To identify this task, consider the following questions:
Where are people doing manual work?
Look for areas where manual tasks consume significant time and resources in your business. These tasks are often repetitive and can be streamlined through automation.
Where are the workflows happening?
Identify the workflows that are central to your business operations. Understanding these workflows will help you pinpoint where AI can enhance efficiency and productivity.
What is the reporting around the workflows?
Assess the current reporting mechanisms for these workflows. This includes the collected data, its analysis, and how the insights are used. Effective reporting is crucial for measuring the impact of any AI-driven changes.
Where are the paper workflows happening?
Consider areas where traditional paper-based workflows are still in use. These workflows are prime candidates for digital transformation through AI, which would increase speed and accuracy.
Real-World Example: Email Marketing Campaign Analysis
I identified a specific task I spent too much time on for my experiment. This task involved transferring email marketing campaign data from a Google Sheet into another Google Sheet. The goal was to determine the campaigns' effectiveness, assess patterns, identify trends, and understand how they tied into my sales meeting goals.
By pinpointing this manual, time-consuming task, I realized that automating this process could save significant time and provide faster, more accurate insights. This task was chosen because it directly impacted my ability to make data-driven decisions and align my marketing efforts with sales objectives.
Identifying and automating such tasks improves efficiency and allows for more strategic use of time, focusing on high-value activities that drive growth.
Establish a baseline
Before making any significant changes, it's essential to understand where you currently stand. Establishing a baseline involves looking at your current workflows, identifying existing problems, and setting clear goals. Here’s how you can break it down:
Workflow Area Sub-Category:
Identify the specific sub-category of the workflow you are focusing on. In my experiment, the sub-category was “Email Marketing Campaign Analysis.”
Workflow Area:
Define the broader workflow area under which this sub-category falls. In this case, it would be “Marketing Analytics.”
Status:
Assess the workflow task's current status. Describe how it is performed, including the tools and methods used. For example: data is manually transferred between Google Sheets.
Workflow Problem:
Identify the main problems with the current workflow. These might include time consumption, error rates, inefficiencies, or bottlenecks. For example, I spent too much time on manual data entry and analysis.
Workflow Goal:
Set a clear goal for optimizing this workflow. This could be reducing the time spent, improving accuracy, or generating more insights. For example, aim to reduce the data transfer and analysis time by 50%.
Proposed Solution:
Outline the solution you propose to address the workflow problem. This could involve implementing AI tools, automating tasks, or adopting new software. For example, an AI-powered tool can automate data extraction and analysis.
OKR Strategy:
Develop an OKR (Objectives and Key Results) strategy to guide your efforts. This strategy should align with your business objectives and provide a clear framework for measuring success.
OKRs:
Define the specific OKR key results the workflow task supports. For instance:
Key Result 1: Increase sales meetings to 10 per week from cold and warm email campaigns by the end of Q3 2024.
Baseline Workflow Steps:
Document the steps involved in your current workflow. This helps in comparing the before and after states. For example:
Extract data from Google Sheet A.
Transfer data to Google Sheet B.
Analyze data for patterns and trends.
Compile reports.
Weekly Baseline Cost:
Calculate the weekly cost of maintaining the current workflow. This includes labor, software, and other relevant expenses, such as time spent on manual data entry.
Baseline Scale:
Determine the current scale of the workflow. This could be the volume of data processed, the number of campaigns analyzed, and so on. For example: five campaigns analyzed per week.
Baseline Time:
Measure the total time spent on the workflow in its current state. This helps in assessing the impact of any changes made. For example, 10 hours per week spent on data transfer and analysis.
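One lightweight way to capture the baseline fields above is a simple record. This is a sketch with the example figures from this section; the hourly labor rate is an assumption for illustration, not a number from the experiment:

```python
from dataclasses import dataclass

@dataclass
class Baseline:
    workflow_area: str
    sub_category: str
    problem: str
    goal: str
    campaigns_per_week: int   # baseline scale
    hours_per_week: float     # baseline time
    hourly_rate: float        # assumed labor rate in USD (hypothetical)

    @property
    def weekly_cost(self) -> float:
        # Weekly baseline cost from labor time alone
        return self.hours_per_week * self.hourly_rate

baseline = Baseline(
    workflow_area="Marketing Analytics",
    sub_category="Email Marketing Campaign Analysis",
    problem="Too much time spent on manual data entry and analysis",
    goal="Cut data transfer and analysis time by 50%",
    campaigns_per_week=5,
    hours_per_week=10,
    hourly_rate=50.0,  # hypothetical
)
print(baseline.weekly_cost)  # 500.0
```

Writing the baseline down in a structured form like this makes the before/after comparison in the next step mechanical instead of anecdotal.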
Establishing this detailed baseline creates a comprehensive picture of your current status. This sets the stage for measuring the impact of any improvements you implement and ensures that your goals are clear and achievable.
Set up the AI hypothesis
In this step, you set up the experiment to test the proposed solution. In my case, I decided to build a GPT called Thinker Analyzer that acted as a marketing campaign data scientist, performing all the tasks I was previously doing manually. Here’s how you can document the steps involved in setting up the hypothesis:
New Workflow Steps:
Define the new steps involved in the optimized workflow using the AI tool. For example:
Thinker Analyzer extracts data from marketing platforms.
Thinker Analyzer processes and analyzes the data.
Thinker Analyzer generates performance reports and insights.
AI Implementation:
Describe how the AI tool is implemented. This includes setup, configuration, and integration with existing systems. For example, Thinker Analyzer can be integrated with Google Sheets and email marketing platforms.
Initial Results:
Document the initial outcomes after implementing the AI tool. This includes any immediate improvements or challenges faced. For example, initial results showed a 30% reduction in data processing time.
Iterations:
Note any adjustments to the AI tool or workflow based on initial results. Iterations are crucial for refining the process, such as tweaking Thinker Analyzer’s data processing algorithms to improve accuracy.
Final Results:
Summarize the outcomes after completing the iterations. This should highlight the overall impact of the AI implementation. For example, the final results indicated a 50% reduction in data transfer time and a 40% increase in report accuracy.
Number of Steps:
Compare the number of steps in the old workflow versus the new AI-powered workflow. For example, reducing from 5 manual steps to 3 automated steps.
Number of Tasks:
Identify the number of tasks involved in each workflow. For example, you are reducing from 8 manual tasks to 4 automated tasks.
Scale:
Assess the scale at which the new workflow operates. This could be the volume of emails or campaigns analyzed, such as increasing from analyzing 5 campaigns per week to 10.
Complexity:
Evaluate the complexity of the new workflow compared to the old one. For example, you are reducing the complexity by eliminating manual data entry.
Time to Execute:
Measure the time required to complete the workflow in the old and new setups. For example, we are reducing execution time from 10 hours per week to 5 hours.
Manual Steps:
Document the number of manual steps eliminated by the AI implementation. For example, you are reducing from 4 manual steps to 1.
Manual Tasks:
Count the number of manual tasks replaced by AI. For example, you are reducing from 6 manual tasks to 2.
People Involved:
Note the reduction in human involvement. For example, you are reducing from 2 people required to 1 overseeing the AI.
Dependencies:
Identify any dependencies that have been removed or simplified. For example, the new workflow removes the dependency on manual data entry.
Volume:
Measure the increase in volume handled by the new workflow. For example, you are doubling the volume of data processed.
Revenue Generated:
Calculate the revenue generated due to the improved workflow. For example, a 20% increase in sales conversions due to faster and more accurate data analysis.
Cost Savings:
Assess the cost savings achieved through AI implementation. For example, we are saving $500 per week in labor costs.
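Pulling the before/after numbers above together, a short comparison script makes the deltas explicit. The figures are the examples quoted in this section:

```python
# (metric, baseline value, value with AI): example figures from this section
metrics = [
    ("steps", 5, 3),
    ("tasks", 8, 4),
    ("campaigns per week", 5, 10),
    ("hours per week", 10, 5),
]

def pct_change(before: float, after: float) -> float:
    """Signed percentage change from before to after."""
    return round((after - before) / before * 100, 1)

for name, before, after in metrics:
    print(f"{name}: {before} -> {after} ({pct_change(before, after):+.1f}%)")
```

A signed percentage keeps the direction of change visible: negative means the workflow shrank (fewer steps, less time), positive means it grew (more volume handled).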
Setting up this detailed hypothesis creates a structured framework for testing and validating AI's impact on the workflow task. This ensures your experiment is well-documented and provides clear insights into the benefits and improvements achieved.
Build the hypothesis and measure the results
In this step, you build a hypothesis for your experiment and measure the results. Here's how I approached it with my experiment:
Hypothesis
I decided to build a GPT called Thinker Analyzer to perform the following tasks:
Act as a Data Scientist: Thinker Analyzer was trained to analyze campaign data, mimicking my process and leveraging my knowledge.
Sentiment Analysis: It performed sentiment analysis to measure doubling rates and effects.
Apply the Rule of 72: The analyzer was programmed to understand and apply the Rule of 72 to the data.
Report Generation: It provided detailed reports and insights on the results.
AI Action Integration: For certain types of analysis, I integrated Claude, another AI model, to ensure the best language model was used for each statistical task.
Target Volume: The analyzer also provided a target volume to reach for the experiment.
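To make the Rule of 72 step concrete: given engagement counts from two consecutive test runs, you can back out a growth rate and a doubling estimate. The reply counts below are hypothetical, not results from the experiment:

```python
def growth_rate_percent(previous: float, current: float) -> float:
    """Period-over-period growth rate, as a percentage."""
    return (current - previous) / previous * 100

def periods_to_double(rate_percent: float) -> float:
    """Rule of 72 estimate; periods match the cadence of the measurements."""
    return 72 / rate_percent

# Hypothetical reply counts from two campaign tests run one day apart
replies_test1, replies_test2 = 10, 12
rate = growth_rate_percent(replies_test1, replies_test2)  # 20.0% per day
print(periods_to_double(rate))  # ~3.6 days to double at this pace
```

Because the measurements are a day apart, the rate is a daily rate and the Rule of 72 answer comes out in days, which is exactly the "doubling over days instead of years" framing of this experiment.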
Running the Experiment
Set Up a Marketing Campaign:
I set up an email marketing campaign designed to utilize Thinker Analyzer's capabilities.
Compare the Before:
Before running the experiment, I documented the current state of my marketing campaign analysis to serve as a baseline.
Run the Experiment:
I ran the email marketing campaign and used Thinker Analyzer to perform all the tasks I previously handled manually. The experiment reduced the time spent on data analysis from about 1.5 hours to just a few minutes.
Measure the Results:
After running the experiment, I compared the results to the baseline to assess the impact of the Thinker Analyzer.
Results After Iteration 1
Workflow Efficiency:
The Thinker Analyzer significantly reduced the time spent on data analysis from about 1.5 hours to just a few minutes.
Sentiment Analysis Accuracy:
The accuracy of sentiment analysis improved by 30%, leading to more precise insights and better decision-making.
Application of Rule of 72:
Thinker Analyzer successfully applied the Rule of 72, providing quick estimates of doubling rates, which helped set realistic growth targets.
Volume and Scale:
The volume of data processed doubled, allowing for analysis of 10 campaigns per week instead of 5.
Report Quality:
The quality and comprehensiveness of the reports improved, providing deeper insights and actionable recommendations.
Cost Savings:
The experiment resulted in cost savings of approximately $500 per week in labor costs.
Enhanced Focus and Learning:
Now I had an easy way to see what was and wasn't working, which let me focus on modifying the experiments to increase engagement and refine my offer, headlines, and email copy.
Google Sheets Integration:
I set up a Google Sheet and pasted the data into tabs labeled Test#_Date. Using the GMASS email automation tool, I copied the test data into the analyzer with a prompt, and voilà, I had an AI experiment. It’s that easy.
Key Learnings
Best Engagement with Warm Leads:
The analysis revealed that the highest engagement rates were with warm leads.
Effective Offers:
The most successful offers tied a personalized desire to the product or service. When this strategy was used, conversion was extremely high.
By building this hypothesis and measuring the results, I saw the benefits of implementing Thinker Analyzer. The experiment demonstrated the potential for significant time and cost savings, and it highlighted the impact on workflow efficiency and the ability to refine marketing strategies based on detailed insights. It removed the guesswork from deciding what to double down on and scale.
Closing Thoughts
Applying the Rule of 72 to your next AI experiment will help you focus and use data to determine whether it scales doubling effects. By leveraging this principle, you can systematically measure the impact of your AI implementations.
Visit our website's Tools page to see some of the free GPT tools we've published.
Marvin
Here are three ways we can support your doubling-with-AI journey whenever you’re ready.
Doubling with AI Coaching Session: For $200, book a live 1-on-1 session for quick win identification, goal setting, and AI integration.
Start Doubling Your Proposal Output and Quality with AI: Get a free AI Proposal Checklist.
Participate in the 30 Days to Doubling Program: Identify and Implement Improvements to Double Output with AI. Learn more.