Ignition & Artificial Intelligence

What are the use cases for AI within Ignition?

Ignition uses AI across the platform. Here are three core areas:

  • Competitive intelligence: Ignition uses AI to auto-generate competitive battlecards by scraping publicly available data and aggregating and summarizing it for product and sales teams. We generate firmographic data, SEO data, reviews, and talking points that address competitor strengths, weaknesses, and pricing.

  • Customer research: Ignition uses AI to aggregate customer feedback across CRM, support ticketing, and conversational intelligence systems. We can gauge customer sentiment and surface feature requests, product strengths and weaknesses, and value drivers. Ignition also uses AI to generate persona profiles that include each persona’s jobs-to-be-done, goals, frustrations, and buying process.

  • Go-to-market plans: Ignition uses AI to generate launch plans based on inputs provided by the user (e.g. tier 1-4, budget, launch goals). We’ll generate the channel plan, asset list, and tasks required for a successful launch.

What AI models does Ignition use?

At Ignition, we use multiple LLMs in the background, depending on the feature. We use Meta’s Llama models, models served through Google Vertex AI, and OpenAI’s GPT models to power Ignition.

How does Ignition give me better outputs than ChatGPT (or other tools)?

Ignition uses different LLMs for each feature to ensure the best output. We also use custom prompts, supplemented by the brand and positioning context that lives in Ignition, to make our AI outputs more relevant and higher quality than what a general-purpose tool can produce.
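
To give a rough sense of how this kind of context-augmented prompting generally works, here is a minimal, hypothetical sketch in Python. The company name, context fields, and prompt wording are all made up for illustration; this is not Ignition’s actual data model or prompts.

```python
# Hypothetical sketch of context-augmented prompting.
# The brand_context fields and the prompt template are illustrative only;
# they are not Ignition's real data model or prompts.

brand_context = {
    "company": "Acme Analytics",  # made-up example company
    "voice": "confident, plain-spoken, no jargon",
    "positioning": "the fastest way for mid-market teams to ship dashboards",
}

def build_prompt(task: str, context: dict) -> str:
    """Prepend stored brand/positioning context to a user task."""
    context_block = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "You are a product marketing assistant.\n"
        f"Company context:\n{context_block}\n\n"
        f"Task: {task}\n"
        "Stay consistent with the brand voice and positioning above."
    )

print(build_prompt("Draft a one-paragraph launch announcement.", brand_context))
```

The point is simply that the model sees your stored brand context alongside each request, which a general-purpose chat tool would not have by default.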


AI Output Quality

How accurate should I expect my AI outputs to be? How does that compare to other tools?

We do our best to use the most recent and advanced AI models available. However, AI is still a new technology with limitations. For example, recent tests of Google’s Gemini model and OpenAI’s GPT-4.5 model show a roughly 90% accuracy rate. That means that about 1 out of 10 times, AI can get the answer wrong. You should expect roughly the same accuracy from your own AI outputs here, just as you would with other AI tools.

Note: the 90% success rate assumes the highest-quality data inputs; limited data can hurt performance. Multi-step workflows also compound the problem. For example, identifying, merging, and categorizing data are three separate steps that can each succeed 90% of the time individually, but chaining them together means the end-to-end result is right only about 73% of the time (0.9 × 0.9 × 0.9 ≈ 0.73).
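
To make the compounding effect concrete, here is a small illustrative calculation. The 90% figure is the same rough estimate quoted above, not a measured benchmark.

```python
# Illustrative only: how per-step accuracy compounds across a multi-step workflow.
per_step_accuracy = 0.90  # the rough estimate quoted above

steps = ["identify", "merge", "categorize"]
overall = per_step_accuracy ** len(steps)

print(f"End-to-end success over {len(steps)} steps: {overall:.0%}")
# Prints roughly 73%: three 90%-accurate steps chained together
# succeed end to end only about 73% of the time.
```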

Why are the outputs I got incomplete, incorrect, or not relevant to my company?

The most likely cause of errors is hallucination, which is when an AI model makes up an answer. Even the best AI models hallucinate. Recent tests of Google’s Gemini model and OpenAI’s GPT-4.5 model show a roughly 90% accuracy rate, which means that about 1 out of 10 times, AI can get the answer wrong. This is the most glaring flaw in AI technology today. We suggest re-running the prompt or revisiting your input data to minimize hallucinations.

How can I improve the outputs I get from Ignition?

First, give Ignition all the relevant information it needs about your company. Do you have a brand voice or specific messaging/positioning? Add it to your AI settings. This gives the models more context and improves the output of AI copywriting. Note: this data is only accessible and used within your own account. We do not access or use your data to train our AI models.

Next, give Ignition more relevant data! The more support tickets, customer transcripts, or CRM notes you add to Ignition, the better our customer feedback and insights features will be. We do not share your data or use it to train our products. You can connect Salesforce, HubSpot, Gong, Intercom, and Zendesk in the integration settings to automate this process.


AI Security & Privacy

How does Ignition use my data within its AI?

We do not access, share, or distribute your data. We do not use your data to train any AI models. Any data you upload to Ignition will remain limited to your account, forever.

Is my data used to train a model?

We do not use your data to train any AI models. Any data you upload to Ignition will remain limited to your account, forever.

Is there any risk of my data leaking? How does Ignition ensure security?

We take data privacy and security seriously. We implement robust measures, including encryption and strict access controls, to safeguard your data and ensure confidentiality. We also adhere to industry standards and regulations, including SOC 1 and SOC 2 certifications and GDPR requirements, to maintain the highest level of data protection for our clients.


Artificial Intelligence 101

How does AI work?

Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines. The goal of AI is to enable machines to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, perception, and language understanding. It’s like magic – but real. At Ignition, we’re applying these concepts to help companies manage the entire go-to-market process. Looking for more technical specifics? Here’s some of the nitty-gritty about how AI works:

  • Machine Learning (ML): Machine learning enables machines to learn from data without being explicitly programmed. It involves algorithms that iteratively learn, identify patterns, and make decisions or predictions. The higher the quality and volume of data you feed the model, the better the output. Ignition works the same way: the more customer feedback and messaging/positioning you add to Ignition, the better our output will be.

  • Deep Learning: Deep learning is a specialized form of machine learning that utilizes artificial neural networks, which are inspired by the structure and function of the human brain. Deep learning algorithms, known as deep neural networks, consist of multiple layers of interconnected nodes (neurons) that process data hierarchically, enabling the system to learn complex representations and patterns directly from raw data.

  • Natural Language Processing (NLP): NLP is a branch of AI focused on enabling computers to understand, interpret, and generate human language. NLP algorithms analyze text or speech data, extracting meaning, identifying sentiment, and generating responses or summaries. This is how Ignition can help with sentiment analysis and named entity recognition (a minimal sketch appears at the end of this answer).

  • Reinforcement Learning: Reinforcement learning is where an AI model learns to make decisions through positive or negative feedback. It allows the model to learn optimal strategies or policies to maximize cumulative rewards over time.

Overall, AI systems combine these and other techniques to process data, learn from experience, adapt to new situations, and perform tasks that traditionally required human intelligence. AI continues to advance rapidly, but there is still room for improvement with the accuracy of its outputs.
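
To make the NLP bullet above a little more concrete, here is a deliberately simplified, hypothetical sentiment scorer. Real systems, including the models Ignition relies on, use learned language models rather than hand-written word lists; this sketch only shows the general idea of turning text into a sentiment signal.

```python
# A deliberately tiny, hand-rolled sentiment scorer, for illustration only.
# Production NLP uses learned models, not word lists like this.

POSITIVE = {"love", "great", "easy", "fast", "helpful"}
NEGATIVE = {"slow", "confusing", "broken", "hate", "frustrating"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: negative, neutral, or positive."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [w for w in words if w in POSITIVE or w in NEGATIVE]
    if not hits:
        return 0.0
    positive_hits = sum(1 for w in hits if w in POSITIVE)
    return (2 * positive_hits - len(hits)) / len(hits)

print(sentiment_score("I love how fast the new dashboard is"))          #  1.0
print(sentiment_score("Setup was confusing and the sync felt broken"))  # -1.0
```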

How does Ignition use AI?

Ignition uses AI in almost every aspect of the product. We see AI as a table stakes technology that has the potential to dramatically reduce the administrative work in the go-to-market process. Competitive intelligence, customer research, roadmapping, and creating content are all examples of tasks that can be accelerated with AI.

At Ignition, we use different AI models tailored to specific use cases. Each model excels in different tasks. We carefully select the most suitable model for each query to ensure users get the best possible output quality.
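
As a rough illustration of what selecting a model per use case can look like in general, here is a minimal sketch. The feature names and model labels are placeholders, not Ignition’s actual routing configuration.

```python
# Hypothetical feature-to-model routing table; names are placeholders,
# not Ignition's real configuration.

MODEL_ROUTES = {
    "competitive_battlecards": "model-tuned-for-summarization",
    "customer_sentiment": "model-tuned-for-classification",
    "launch_plan_drafting": "model-tuned-for-long-form-writing",
}

DEFAULT_MODEL = "general-purpose-model"

def pick_model(feature: str) -> str:
    """Return the model assigned to a feature, falling back to a default."""
    return MODEL_ROUTES.get(feature, DEFAULT_MODEL)

print(pick_model("customer_sentiment"))  # model-tuned-for-classification
print(pick_model("roadmapping"))         # general-purpose-model
```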

We do not access, share, or distribute your data. We do not use your data to train any AI models. Any data you upload to Ignition will remain limited to your account, forever.

We do our best to use the most recent and advanced AI models available. However, AI is still a new technology with limitations. For example, recent tests of Google’s Gemini model and OpenAI’s GPT-4.5 model show a roughly 90% accuracy rate, which means that about 1 out of 10 times, AI can get the answer wrong.

Why are the answers I’m getting from AI wrong?

Our AI features do their best to provide accurate and relevant answers to your inquiries, but occasionally they may not meet your expectations. There are a few possible reasons why:

  • Current Limitations: Despite significant advancements, AI technologies still have limitations. They may not grasp nuance, context, or subtlety the way humans do. We’re confident we can give you the important ideas, but you may need to apply your own knowledge to fine-tune the outputs. Frankly, this is the most common reason: AI is still a new, rapidly evolving technology, and sometimes it gets things wrong.

  • Hallucinations: Hallucinations are when AI models make up an answer. These made-up answers often aren’t grounded in reality and may not even relate to any data the model was trained on. Even the best AI models hallucinate. Recent tests of Google’s Gemini model and OpenAI’s GPT-4.5 model show a roughly 90% accuracy rate, which means that about 1 out of 10 times, AI can get the answer wrong. This is the most glaring flaw in AI technology today.

  • Training Data: Our AI features rely on the data they’ve been trained on. If that data is insufficient or biased, they may produce inaccurate results. We currently use ethically sourced, publicly available data, so the answer you’re looking for may require proprietary or private data.

  • Complexity of the Question: AI performs best with clear and specific questions. If the query is ambiguous or complex, the AI may struggle to provide a satisfactory response.
