
Ignite X is a recognized, integrated marketing agency in Silicon Valley that delivers content marketing, executive branding, and public relations services.  

Blog

Ignite X specializes in helping technology startups grow their market visibility and brand. We bring expertise, connections and tenacity to helping brands break through the noise. Here are some of the things we've learned along the way. 


Is OpenAI's New Search Engine for ChatGPT a Game-Changer for the Future of Search?

Carmen Hughes

The Verge, one of the top technology news outlets, recently published a study titled "What's Next with AI?" that revealed a striking finding: 61% of Gen Z and 53% of Millennials now favor AI tools over traditional search engines like Google for their online searches. This significant shift among younger generations raises questions about the future of search and underscores the challenges that tech giants like Google may face. How will the growing adoption of AI-based search tools among younger users affect search giants, advertising, and content?

OpenAI's Rumored AI Search Engine: A Potential Game-Changer
OpenAI's rumored plans to launch a dedicated AI-driven search engine for ChatGPT, similar to Perplexity and You.com, could disrupt the search landscape. ChatGPT and other AI-powered search tools give users a fundamentally different experience: devoid of advertisements, AI-powered search delivers the precise information users seek without forcing them to click through links that may or may not contain the right answers. As these tools gain popularity among younger users, Google must accelerate its AI integration and refine its search capabilities with more powerful AI features to meet evolving user expectations.

Google's Swift Response: Integrating AI to Enhance Search Experience
The search giant is already making great strides, with a focus on making search more intuitive, efficient and user-friendly. 

  • Google supercharged its search with GenAI, which enables Google Search to anticipate a user’s needs and provide more comprehensive answers in its results. Imagine searching "Bryce Canyon vs. Arches for a family with young kids and a dog" and getting an answer for which park is ideal, all in a single search. 

  • Google also made its search experience more conversational. Based on a search term’s context, it can suggest follow-up questions and refine results, making it easier to find exact results faster. 

  • Google Search is now multi-modal. If you haven’t tested it yet, Google Lens integrates AI to enable searches using images or a combination of pictures and text. This enhancement is beneficial for users searching for a particular product or location. 

  • Most recently, Google has added Gemini, allowing users to prompt Gemini directly in Google’s search bar.

The Disruptive Potential of AI Search Engines
Modern AI-powered search tools offer users a more efficient search experience, potentially reshaping how we seek and engage with information online. For example, Perplexity and Phind have won over many users with their distinct capabilities of presenting focused, detailed search findings complete with associated sources.

This shift poses significant challenges for Google, the long-standing leader in the search and AI markets, and Google is unlikely to ignore such a direct threat.

Google was among the earliest innovators in artificial intelligence, creating the Transformer model architecture introduced in the paper "Attention Is All You Need," co-authored by Aidan Gomez and fellow Google researchers. Google’s landmark Transformer architecture is the foundation for the large language models (LLMs) developed by OpenAI and many other companies. Google has continuously innovated in AI architectures and capabilities across various industries, such as its recent Med-Gemini, an advanced multimodal GenAI model fine-tuned for the healthcare industry. One of Google’s notable early breakthroughs involved a riveting face-off, DeepMind’s Challenge Match, in 2016, when, for the first time, its AlphaGo AI made self-taught moves against the world’s best Go player.

The Rise of AI-powered Search Engines
The rise of AI search engines will trigger shifts in advertising models. If more users adopt these AI tools as their new "go-to" search tool, they could attract a larger share of digital advertising spend. Perplexity, for instance, plans to introduce brand-sponsored ads within its suggested follow-up questions. This placement is particularly pertinent because follow-up queries account for roughly 40% of all queries on Perplexity’s platform. Perplexity intends these ads to be contextually relevant and native to the search experience. This shift has prompted Google and other traditional search engines, such as Microsoft Bing, to reevaluate their strategies and find new ways to maintain advertising revenues.

Adapting Advertising Strategies for the AI Search Era
The rise of AI-powered search engines could significantly disrupt traditional advertising models. As users increasingly rely on these tools for their search needs, businesses and marketers will need to rethink how they target their audiences effectively. AI-driven search engines could offer more targeted, personalized advertising opportunities based on users' search behavior and preferences. This shift may require advertisers to invest in new AI-powered advertising platforms and develop innovative ad formats that seamlessly integrate with the conversational nature of AI search. Businesses that successfully navigate this changing landscape will be better positioned to connect with consumers in the era of AI-powered search.

Balancing Personalization and Privacy in AI-Powered Search and Advertising
As AI becomes more integrated into search engines and advertising platforms, concerns about data privacy and user trust in these specific contexts will grow. As The Verge’s study pointed out, 78% of respondents, especially younger generations, want more transparency in how AI is used in digital content.​ AI-powered search and advertising rely heavily on collecting and analyzing user data to provide personalized results and targeted ads. This data-capture requirement raises privacy concerns such as data profiling and filter bubbles. There is also the risk of built-in algorithmic bias, and the lack of transparency in GenAI models can lead to unfair outcomes and erode user trust. To address these issues, search engine providers and advertisers must prioritize transparency, obtain informed user consent, and give users control over their data. Effective regulations for AI use in search and advertising are also needed to protect user privacy and ensure responsible practices.

Societal Implications with AI-powered Search
The rise of AI-powered search is changing how users find information and reshaping the content landscape. As users become accustomed to the improved user experience and more targeted results provided by AI-powered search tools, their expectations for content quality, relevance, and presentation are evolving. This shift will likely have a profound impact on content writers and creators, who will need to adapt their strategies to meet the changing demands of their audience.

One potential outcome is that content writers may focus more on developing comprehensive, well-researched, and source-backed content that directly addresses users' specific questions and needs. This approach could lead to a move away from short, generic blog posts and towards more in-depth, authoritative content that aligns with the style and format delivered by platforms like Perplexity.

The growing prominence of AI in search has far-reaching implications beyond privacy and transparency. As AI-powered search engines become users' primary gateway to information, these new tools influence how people access, consume, and interpret data. We’ve already witnessed this with the popularity of Reddit, for example. This shift, however, raises concerns about the potential for AI algorithms to create echo chambers, reinforce biases, or prioritize specific sources of information over others. The societal consequences could include increased polarization, the spread of misinformation, and a narrowing of the diverse perspectives available in public discourse.

Embracing the AI Search Revolution: Where Do We Go From Here?
The Verge's study serves as a wake-up call for traditional search giants, highlighting the transformative changes ahead as AI becomes increasingly prominent in our daily lives. Beyond the search market, the rise of AI tools will have broader societal impacts, with younger generations seeking transparency in how vendors use AI across various sectors, from work and education to social interactions. As the search landscape evolves, companies that harness the power of AI while prioritizing user trust, privacy, and transparency will thrive in this new era of search and meet users’ growing demands for responsible AI use.

As AI plays a more significant role in shaping people's opinions and decision-making processes, search companies, policymakers, and society must continue to examine the ethical implications and work toward developing regulations, guidelines and safeguards to ensure that AI-powered search promotes a healthy and informed public sphere.

AI Gone Rogue: Cautionary Tales of Misuse and Blunders as GPT 5.0 Looms

Carmen Hughes

Cautionary Tales of AI Misuse

GPT-5 is coming! When? No one outside of OpenAI knows for certain, but according to Business Insider's sources, it will arrive sometime in "mid-2024." So is the timing T-minus 60 days? We'll have to see when GPT-5 makes its debut. One glaring shortcoming of the original GPT-3.5 was its wild hallucinations, in which it concocted fake statistics, quotes, URLs, nonexistent reports, and more. The warning was well known to most: GPT hallucinates.

Let's review some crazy misdeeds that corporations, lawyers, universities, and others committed while harnessing AI. From trying to cut corners in the courtroom to creating AI-generated fake people to deepfakes, organizations and people have been publicly outed for their AI misuse.

Legal Blunders

Legal Beagles Busted: AI Gets Checked in the Courtroom
It’s hard to say who was hallucinating in this legal tale. Two lawyers, Steven Schwartz and Peter LoDuca, of the law firm Levidow, Levidow & Oberman were busted big time after submitting a brief filled with bogus quotes, citations, and nonexistent judicial opinions. Worse, they attempted to defend the fabricated opinions after being called out for their shortcuts. The judge admonished and fined the lawyers and the Levidow firm, and ordered the lawyers to write letters of apology to the six judges referenced in the fake citations.

Lawyer's AI Mishap: Gets Him Suspended and Fired
Last November, attorney Zachariah Crabill faced a double whammy. The Colorado State Bar suspended him for one year, and his law firm fired him after he admitted using ChatGPT to draft a motion filed in civil court. The AI-generated motion cited incorrect and fictitious cases, which Crabill failed to fact-check before submitting it earlier that spring. Before a hearing, Crabill discovered that the cited cases were incorrect, yet he chose not to disclose the errors to the court or withdraw the motion. When questioned by the judge, he initially blamed a legal intern but later confessed to relying on ChatGPT. Despite his setback, Crabill believes AI can make legal services more affordable. He has since launched his own firm that advocates for using AI responsibly as a "virtual legal assistant" to help level the playing field for lower-income clients.

Fake Faces, Real Consequences: The Pitfalls of AI-Generated Personas

Tech Conference Exposed: AI-Generated Female Speakers 
If you didn’t read about DevTernity, a software coding conference, here’s what transpired. DevTernity’s founder, Eduards Sizovs, was called out by 404 Media not only for posing as a female coder on Instagram but also for concocting a ruse to make his organization appear to be a proponent of diversity. The goal was to give the impression that the slate of speakers was balanced and included women, in order to appeal to panelists and attendees. Rather than making the effort to identify, assess, and secure qualified female speakers and panelists, Sizovs took a shortcut: he relied on AI-generated fake profiles of female speakers to falsely project that DevTernity’s conference lineup was diverse. The conference imploded, with key speakers canceling, damaging the organization’s credibility. 

Sports Illustrated's Backlash Edition 
Unfortunately, Sports Illustrated (SI) was also busted for misleading readers with dozens of articles. In a cost-saving move, this highly regarded 70-year-old brand used AI to generate stories, but it went further. Futurism uncovered and reported that the magazine published articles under fake author names with AI-generated profile headshots. SI’s owner, The Arena Group, blamed a vendor, but many question its management and quality control, or lack thereof, over the content. Fact-checking was born in the publishing industry, so the claim that no one noticed the authors were nonexistent strains credulity.

Deepfakes Go Wild

Political Deepfakes Caught and Shut Down
OpenAI suspended a developer for using ChatGPT to build a personalized chatbot that impersonated a politician running for office. Two Silicon Valley entrepreneurs behind a super PAC had created a chatbot mimicking a Democratic presidential hopeful. Although visitors to the chatbot's website were shown a disclaimer, the super PAC's actions directly violated OpenAI’s public notice barring the use of personalized ChatGPT chatbots to impersonate politicians. Unfortunately, OpenAI’s enforcement may still not prevent people from using open-source tools to create deepfake chatbots for political purposes in the future.

Athletic Director Busted for Pushing Racist-Rant Deepfake of Principal
In April 2024, Baltimore police arrested a high school athletic director for using AI to create and spread a racist deepfake audio recording impersonating the principal. Police said that the director, Dazhon Darien, retaliated against the principal, Eric Eiswert, for investigating him over suspicious payments he had submitted. Darien used ChatGPT and Bing Chat to generate the vile fake rant, emailed it to himself and select school staff while posing as a whistleblower, and then watched it go viral. The principal faced threats and needed police protection until experts confirmed the audio was an AI fake. Darien now faces charges for disrupting school operations and stalking the principal in a disturbing case of deepfake revenge slander. As deepfakes take root, we should all be careful not to jump to conclusions and accept a doctored video or recording as the real thing.

AI Fiascos: From Wrongful Arrests to Killer Recipes  

Pregnant Mom Arrested Due to Faulty AI-Based Face Recognition Technology
An eight-months-pregnant mom hears a knock at her door. She opens it to discover that it's the police, and they are there to arrest her. Her crime? Carjacking. The problem is that she didn’t commit it. This gross error happened in Detroit when AI-based facial recognition software identified the wrong Black woman. In the police department’s defense, the carjacking victim ID’d the pregnant mom from a lineup of six photos. The Detroit Police Department, however, relied on a mugshot in its database and skipped the step of comparing the photo to the pregnant mom’s driver’s license on file. The department now faces three lawsuits, all involving mistaken identities. 

Pak’nSave AI Recipe Generates a Chlorine-Gas Libation
What started as a good idea, an AI-powered site that lets people plug in available food ingredients to get a recipe, turned into a hazardous concoction. Pak’nSave's Meal-bot combined AI technology with smart, money-saving strategies to help households use up their food. To test the AI recipe generator, a reporter entered water, bleach, and ammonia as ingredients, and the Meal-bot produced an “aromatic water mix” recipe that would generate chlorine gas. Chlorine gas triggers coughing, eye and nose irritation, and breathing difficulties, and exposure can be fatal. The lesson is that companies must build rules and safeguards into their AI models to protect consumers who may not know better. A simple disclaimer does not go far enough.

Vanderbilt University Apologizes for using ChatGPT to Write Mass-Shooting Condolence
Rather than rely on internal staff to publicly address a mass shooting at another university in Michigan, Vanderbilt University's Peabody College decided to use AI to create and send a message via mass email. The communication was factually incorrect, and a note at the end disclosed that the content had been prepared using OpenAI's ChatGPT. Vanderbilt's decision and approach were insensitive because the event involved a human tragedy. Its action put the university in a bad light, calling into question its decision-making and empathy.

Using AI to Do Harm: UnitedHealthcare Deploys Faulty AI to Profit 
In an ongoing class action lawsuit, health insurer UnitedHealthcare is accused of knowingly using an AI algorithm to wrongfully deny elderly patients care owed to them under Medicare Advantage plans. The suit alleges that the insurer systematically overrides physicians’ recommendations to deny elderly patients the extended critical-care facility stays they need, and that it relies on its faulty AI model despite knowing of the model's 90% error rate. This scheme allegedly enables UnitedHealthcare to collect premiums without paying for the critical healthcare that seniors need. To date, UnitedHealthcare reportedly still employs this AI to maximize its profits at the expense of elderly patients.

Navigating the Future of AI: Lessons Learned and the Path Forward
The rise of powerful AI tools has brought excitement and concern. While AI tools will revolutionize most industries and make our lives easier, they also come with significant risks when misused or applied without proper safeguards. The examples we've explored – from lawyers submitting fake cases to deepfakes – highlight the importance of fact-checking AI outputs, implementing robust safeguards, and being transparent about the use of AI in decision-making processes. These cautionary tales teach important lessons. We must commit to AI's responsible development and deployment to harness its power for good while reducing the risks of AI misuse and unintended consequences.

The future of AI is bright, but it's up to all of us to ensure that it's a future we can trust. Let's learn from the mistakes of the past and work together to build an AI-powered world that benefits everyone.