Author: thiagomateria@gmail.com

  • The Importance of using Email Marketing for your business

    In today’s digital world, it’s crucial to have a good email marketing strategy to build a relationship with your leads, convert new prospects into buying customers, and turn first-time customers into recurring clients.

    But having an email list, as the common marketing jargon goes, is about far more than selling your products and services: staying connected with your audience helps you inform your SWOT analysis, understand your customers’ desires, and gauge the general perception of your brand.

    Although virtual reality, AI, chatbots, SEO, social media and affiliate marketing are growing trends that make it feel as though email marketing is in decline, the truth is that email marketing is still the most powerful strategy for building your client base.

    So why is email marketing so important? The main reason lies in the fact that consumers and non-consumers alike see email as a safe channel of contact; they don’t feel intimidated or exposed by it. It’s like receiving a letter through the post from your bank or your solicitor.

    But there are many other benefits to owning an email list. Imagine if Facebook, Google or any other social media platform changed its rules, blocked your account for a data violation or a breach of its terms (as has happened to many companies around the world), or was simply displaced by a new trending platform that everybody wants to do business with. If you rely solely on those platforms, you run the risk of losing a lot of business. A list of valid emails, on the other hand, is a business asset that no one can take away from you. Your email list is gold: you are gradually building a customer base for your business and for any future products and services you may wish to sell or promote. So although social media and advertising platforms are a great way to promote your brand, and despite enthusiasts predicting that email marketing will die, it remains proven to be the most effective form of professional communication, and we believe it is here to stay.

    Here is a video that summarizes this:

    By the way, you’ve probably heard the terms sales funnel or lead magnet page at some point during your journey as an aspiring marketer or someone interested in building a business online. A sales funnel is a way of visualising how a potential customer (your “avatar”) moves through stages of awareness of your brand. It has three stages:

    • Cold (Top)
    • Warm (Middle)
    • Hot (Bottom)

    The top of the funnel is generally someone who has never heard of your services and who, perhaps through your social media or an organic or paid advertising campaign, saw a video, an image or a piece of content that resonated and connected with them. As soon as this happens, the prospect turns into a warm lead as they deepen their awareness of your offer, whether a service or a product. The final stage is a hot lead: the moment they buy from you and become a customer.

    A good sales funnel strategy is usually paired with what we call a lead magnet landing page, where you offer your potential avatar something for free, such as an e-book, a masterclass or even a private one-to-one call, in exchange for their email address. You then advertise to this potential avatar, informing your cold lead that you are offering this service or product for free.

    Below is an example of a landing page offering a free e-book as a lead magnet:

    Once you’ve collected that lead, then what happens? The potential lead receives an email with their promised freebie, and then what? The next step is to set up an email automation to warm up your lead and deliver as much value as possible, in line with your brand values and mission objectives.
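    As an illustration, a drip sequence like the one described above can be represented as a simple schedule. This is a minimal sketch with hypothetical subjects and timings, not the API of any particular email platform:

```python
from datetime import datetime, timedelta

# A hypothetical drip sequence: each step is (delay after signup, subject, goal).
# Subjects, delays and goals are illustrative assumptions.
SEQUENCE = [
    (timedelta(days=0), "Here is your free e-book!", "deliver the freebie"),
    (timedelta(days=2), "Did you get a chance to read it?", "re-engage"),
    (timedelta(days=5), "3 lessons our customers learned", "deliver value"),
    (timedelta(days=9), "A small thank-you offer inside", "convert"),
]

def schedule_for(signup_time):
    """Return the concrete send times for one subscriber."""
    return [(signup_time + delay, subject) for delay, subject, _ in SEQUENCE]

for when, subject in schedule_for(datetime(2026, 1, 1)):
    print(when.date(), "-", subject)
```

    In practice, your email provider’s automation builder plays this role; the point is simply that every email in the sequence has a fixed delay and a single clear objective.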

    But how do you get them to open the next emails, when people receive loads of emails in their inboxes every day and feel that it’s all junk? Excellent question. Wise businesses use what is called “copy” in their email campaigns, which simply means persuading people through attractive and engaging writing. In the marketing world, the person responsible for creating effective email copy is called a “copywriter”, and this role is crucial in any email marketing campaign because it determines whether consumers open and click on your emails or not.

    Emails have two key elements that every recipient looks at: the sender and, more importantly, the subject line. Getting your potential leads to open your emails is the number one factor that makes all the difference in your campaign, because if your email isn’t opened, the rest of the message doesn’t matter.

    Another key element of your marketing campaign is the CTA (call to action), every email your team writes should have a clear objective.

    Select a good mail provider such as Mailchimp, MailerLite or ConvertKit; trust me, the options are almost infinite. However, do pay close attention to deliverability, because some providers deliver poorly and emails tend to end up in people’s junk or spam folders rather than their inboxes, which is where you should be aiming. The tip here is to shop around and test a few of these tools for free, as most of them offer a free trial period before you need to commit.

    Finally, when writing your emails, think about the person reading them and try to make them as human as possible. Many businesses fail with their email marketing campaigns because their tone of voice is unnatural, which makes the reader feel that no real human wrote the message.

  • Reducing waste and optimizing technology for Oddbox on a group project

    Working as part of a marketing team requires many skills, including intelligence, love, passion and a strategic approach towards the goals and objectives of any given task. There are many lessons to be learned from teamwork; one of the most important is that sometimes things do not work out as we planned, and this can cause confusion, stress and an undesired final result. But there is always light at the end of the tunnel when a positive approach and a good plan are in place to validate the initial thoughts, concerns and considerations.

    The group’s initial task was to analyse https://www.oddbox.co.uk, a phenomenal business that benefits the environment and reduces food waste by rescuing fruit and vegetables that would otherwise be discarded and reselling them on a subscription basis to consumers in the London area.

    Oddbox connects with farmers around the UK who have fruit and vegetables that are about to be wasted, and links them with consumers like you, who select the size of box they would like delivered to their doorstep.

    Obviously you don’t always receive the best-looking fruit and vegetables, but you know that you are reducing food waste and helping the environment.

    I was assigned by the marketing team to analyse technological solutions that would help make Oddbox’s business model more efficient while providing consumers the same value proposition. The initial step for our group of marketing specialists was to look at the technology currently used by the business, analyse its website, and find out what its competitors are doing at the moment. We collectively decided to conduct in-depth market research into the possibility of using new technology, and the team came up with a variety of practical, cost-effective solutions that would benefit Oddbox and improve its reputation through the use of NFC tags, AI and blockchain technology.

    A colleague and I were assigned to investigate the use of NFC tags for the business and how useful they could be for the company. We initially researched how many mobile phones in London are compatible with the technology and found that around 70% of phones support it. We then looked at the benefits of NFC tags: they are fast, versatile, simple, require no installation and, most importantly, are inexpensive to set up. The drawbacks we found related to data privacy, where someone could steal data from a customer who receives a box at home, so we looked at encryption, which resolved this hurdle. Our final concern was where and how to implement the tag, and we found that the best place would be the inside flap, where the customer would see it as soon as they opened the box. The team were happy with our findings, and we strategically incorporated them into our final idea.

    Below is a picture of our NFC tag idea.

    We were very glad that this project was a huge success, as we had two members from SAS, Mike Turner and Neil Griffin, who validated our initial ideation. Please find below a copy of the group’s final video:

    And also an image of the winning team from the competition we took part in for the Oddbox project.

  • AI Generated Synthetic Data

    Synthetic Data: The Invisible Fuel Powering the AI Revolution

    Summary

    Synthetic data, artificially generated information that statistically mirrors real-world datasets, has emerged as one of the most strategically significant technologies in modern AI development. This article examines its definition, generation techniques, industrial applications, documented risks, and regulatory landscape, drawing on research and market data from 2024 to 2026. The central argument is that synthetic data is not a replacement for reality but a disciplined amplifier of it, one that demands rigorous governance to deliver on its promise without introducing new and systemic harms.

    Section I — What It Is

    Defining Synthetic Data: From Workaround to Strategic Asset

    Every powerful technology has an origin story that begins with a problem no one wanted to solve the hard way. Synthetic data is no different. Initially conceived as a method to protect user privacy, a way to train systems on data that looked like credit card numbers or phone records without ever touching the real ones, it has since evolved into something qualitatively more significant. Today, synthetic data is artificially generated information that mimics real-world data, used not merely to protect privacy but to fill data gaps, simulate rare events, test new scenarios, and scale AI training pipelines to previously impossible dimensions.[1]

    The World Economic Forum’s Global Future Council on Data Frontiers, in its September 2025 briefing paper Synthetic Data: The New Data Frontier, drew a sharp line between the old and the new: “Synthetic data is no longer just a tool of necessity, it’s a driver of innovation.”[2] Entire urban environments can now be replicated for autonomous vehicle testing. Media companies generate massive synthetic training datasets for recommendation systems. Healthcare researchers test treatment plans at scale using synthetic patient records without ever approaching a real medical file.[2]

    The formal definition is deceptively simple: synthetic data is artificially generated information that statistically preserves the properties, distributions, and relationships of real-world data. A synthetic healthcare record will have the right demographic distributions, correlating comorbidities, and realistic drug prescriptions, without representing any actual patient. A synthetic financial transaction dataset will exhibit the right fraud signatures, seasonal patterns, and account behaviours, without exposing a single real customer. The art is in the generation; the challenge is in the validation.

    • $1.88B: AI-generated synthetic tabular dataset market, 2025[3]
    • 37.9%: compound annual growth rate, 2024–2025[3]
    • $6.73B: projected market size by 2029[3]
    • <5%: accuracy degradation of leading methods vs. real data[4]


    Section II — How It’s Made

    The Generation Toolkit: GANs, Diffusion Models, and Beyond

    Understanding synthetic data requires understanding how it is made — and the technical arsenal has expanded dramatically. The three primary architectural families currently dominate the field, each with distinct strengths and failure modes.

    Generative Adversarial Networks (GANs)

    Two neural networks, a generator and a discriminator, compete in an adversarial process until the generator produces data indistinguishable from real examples. Long considered the gold standard for high-fidelity image synthesis, GANs excel at visual data but are prone to training instability and “mode collapse,” where the generator produces only a narrow range of outputs.[5]
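    To make the adversarial game concrete, here is a deliberately tiny sketch: a linear generator and a logistic discriminator trained against each other on 1-D Gaussian data, with numerical gradients used for brevity. Every modelling choice here is an illustrative assumption, not a production GAN recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(3.0, 0.5, size=1000)          # the "real" data: N(3, 0.5)

gen = np.array([1.0, 0.0])                      # generator: g(z) = w*z + b
disc = np.array([0.0, 0.0])                     # discriminator: sigmoid(u*x + v)

def g(p, z):
    return p[0] * z + p[1]

def d(p, x):
    return 1.0 / (1.0 + np.exp(-(p[0] * x + p[1])))

def disc_loss(dp, gp, xr, z):
    # Discriminator wants real -> 1 and fake -> 0
    return -np.mean(np.log(d(dp, xr) + 1e-9) + np.log(1 - d(dp, g(gp, z)) + 1e-9))

def gen_loss(gp, dp, z):
    # Non-saturating generator loss: try to fool the discriminator
    return -np.mean(np.log(d(dp, g(gp, z)) + 1e-9))

def num_grad(f, p, eps=1e-4):
    # Central-difference gradient, to keep the example free of autodiff
    grad = np.zeros_like(p)
    for i in range(p.size):
        step = np.zeros_like(p)
        step[i] = eps
        grad[i] = (f(p + step) - f(p - step)) / (2 * eps)
    return grad

lr = 0.05
for _ in range(2000):
    xr = rng.choice(real, 128)
    z = rng.normal(size=128)
    disc -= lr * num_grad(lambda dp: disc_loss(dp, gen, xr, z), disc)
    z = rng.normal(size=128)
    gen -= lr * num_grad(lambda gp: gen_loss(gp, disc, z), gen)

samples = g(gen, rng.normal(size=1000))
print("synthetic mean:", round(float(samples.mean()), 2))
```

    With enough steps the generator’s output distribution tends to drift toward the real data’s mean; the instability and mode collapse described above show up readily once the real data is multimodal.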

    Diffusion Models

    The dominant paradigm in image generation as of 2025, diffusion models learn to reverse a process of gradually adding noise, denoising random patterns into structured, realistic outputs. They offer superior diversity and stability compared to GANs but carry higher computational cost. PMC research confirms they now underpin most text-to-image and image-to-image systems.[6]
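    The mechanics fit in a few lines: with a standard linear noise schedule, the forward (noising) process is a closed-form blend of the clean sample and Gaussian noise, and a model that predicts the noise can invert it. A minimal numerical sketch, using schedule values in the spirit of the common DDPM defaults:

```python
import numpy as np

rng = np.random.default_rng(1)

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
abar = np.cumprod(1.0 - betas)          # cumulative signal-retention factor

x0 = rng.normal(size=8)                 # a "clean" data sample
t = 600
eps = rng.normal(size=8)                # the noise injected at step t

# Forward process, closed form: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps
xt = np.sqrt(abar[t]) * x0 + np.sqrt(1 - abar[t]) * eps

# A trained network predicts eps from (xt, t); given the true eps,
# the clean sample is recovered exactly:
x0_hat = (xt - np.sqrt(1 - abar[t]) * eps) / np.sqrt(abar[t])
print(np.allclose(x0, x0_hat))          # True
```

    The entire difficulty of diffusion modelling lives in learning that noise prediction; the algebra around it is this simple.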

    Variational Autoencoders (VAEs)

    VAEs encode data into a latent probability space and then decode it back into synthetic outputs. They carry lower computational costs than GANs, are better suited to smaller datasets, and do not suffer from mode collapse. Particularly effective for generating structured medical records, including image, numerical, and bio-signal data, VAEs remain a core tool in healthcare synthetic data pipelines.[7]
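    The sampling step at the heart of a VAE, the reparameterization trick, is compact enough to sketch directly. The encoder outputs below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical encoder output for one input record: a mean and log-variance
# per latent dimension (the values are illustrative assumptions).
mu = np.array([0.5, -1.0])
logvar = np.array([0.0, -2.0])

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I), which
# keeps the sampling step differentiable with respect to mu and logvar.
eps = rng.normal(size=(100_000, 2))
z = mu + np.exp(0.5 * logvar) * eps

# The empirical statistics of z match the encoder's distribution:
print(z.mean(axis=0).round(2), z.std(axis=0).round(2))
```

    Decoding each sampled z back through the decoder network is what turns these latent points into synthetic records.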

    LLM-Based Generation

    Large language models like GPT-4, Llama, and DeepSeek are increasingly used as “draft machines” for synthetic text data, generating instructions, dialogues, rationales, and tool traces constrained by domain rules and templates. The process is not unconstrained hallucination: models are guided by prompts, filtered by quality checks, and validated before entering training pipelines.[8]

    The practical workflow, as described by AI practitioners writing in late 2025, typically involves layering these approaches. A team might use an LLM to generate 1,000 variants of a discharge instruction or logistics exception, then apply machine learning scoring, clustering, and diversity checks to remove near-duplicate patterns before the data enters a training pipeline.[8] The goal throughout is control, not abundance: synthetic data is most valuable when it is specific, targeting long-tail events, domain-specific gaps, and multimodal workflow combinations that real data cannot provide efficiently.[8]
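    The near-duplicate filtering step in that workflow can be sketched with a token-set similarity check. Real pipelines typically use embeddings or MinHash; the Jaccard measure, the threshold, and the sample variants here are illustrative assumptions:

```python
# Greedy near-duplicate filter over generated text variants.
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def dedupe(candidates: list[str], threshold: float = 0.8) -> list[str]:
    """Keep a variant only if it is sufficiently different from all kept ones."""
    kept: list[str] = []
    for text in candidates:
        if all(jaccard(text, k) < threshold for k in kept):
            kept.append(text)
    return kept

variants = [
    "Take one tablet twice daily with food.",
    "take one tablet twice daily with food.",   # near-duplicate (case only)
    "Take two tablets once daily on an empty stomach.",
]
print(dedupe(variants))   # the case-only duplicate is dropped
```

    Diversity checks of this kind are what keep a synthetic corpus from quietly collapsing into a few templated patterns.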

    “The competitive edge won’t come from who has the shiniest frontier model license; it will come from who runs the smartest flywheels: curated human corpora, disciplined synthetic data generation, and relentless validation on messy real-world data.” (InvisibleTech AI, December 2025)


    Section III — Industrial Applications

    Where Synthetic Data Is Already Changing the World

    The industries being transformed by synthetic data are not hypothetical. The applications are live, documented, and in several cases mission-critical.

    Healthcare

    Patient Privacy Without Research Paralysis

    Synthetic patient data enables researchers to test treatment plans at scale, train diagnostic AI on synthetic medical imaging, and model drug interactions without exposing protected health information. Research on rare diseases published in Frontiers in Digital Health (February 2025) confirms that VAE-GAN hybrid models now generate high-quality patient records that bridge data gaps without compromising confidentiality.[7] The European Health Data Space is emerging as the regulatory framework for this work across the EU.[7]

    Finance

    Fraud Detection Without Fraud Exposure

    Financial institutions use synthetic transaction data to train fraud detection systems, test risk management models, and optimise portfolios, particularly when access to real-world financial data is limited or raises regulatory concerns. Research published via arXiv in 2023 and cited across 2025 literature comprehensively catalogues these applications in risk assessment, portfolio optimisation, and algorithmic trading.[9]

    Autonomous Vehicles

    Safety-Critical Scenarios at Scale

    Training self-driving cars requires exposure to rare, high-risk scenarios, the kind that are prohibitively dangerous and expensive to stage in the real world. Companies like Waymo now replicate entire urban environments synthetically. A December 2025 ScienceDirect review of generative models for transportation systems documents GAN and diffusion model applications across trajectory generation, traffic flow prediction, and sensor data simulation for autonomous driving systems.[10]

    Large Language Models

    Solving the Training Data Scarcity Crisis

    The data landscape for AI training has become dramatically more restrictive. News media and Reddit now protect their intellectual property and sue AI labs for non-compliance. Cloudflare introduced pay-per-crawl for 37.4 million hosted websites as of August 2025. In this environment, synthetic data has become an affordable, scalable solution to the “cold start problem” that engineers face when real data is locked, licensed, or legally inaccessible.[11]

    Retail & Marketing

    Personalisation Without Privacy Risk

    Synthetic customer data helps companies model purchasing behaviours, predict trends, and develop personalised recommendation systems without compromising real customer information. The same approach allows retailers to stress-test loyalty programs, pricing models, and supply chain scenarios using artificially generated consumer populations with statistically realistic characteristics.[12]

    Rare Disease Research

    Bridging the Data Desert

    Rare disease research faces a structural crisis: small, heterogeneous patient populations make clinical trial design and AI-assisted diagnosis severely data-limited. Synthetic data generation, including Conditional VAEs designed specifically for small datasets, is allowing researchers to generate diverse and representative patient records, enabling global research collaboration without the legal and ethical barriers of sharing real patient data across borders.[7]


    Section IV — The Hidden Risks

    The Jekyll and Hyde Problem: When Synthetic Data Goes Wrong

    Every powerful tool carries a shadow, and synthetic data is no exception. The research community has identified a cluster of risks that, taken together, constitute what the World Economic Forum calls “significant and systemic threats” that are “magnified by the difficulty of distinguishing between AI-generated and real-world data.”[2] The most alarming of these is a phenomenon researchers have named model collapse.

    Model Collapse: The Recursive Trap

    Model collapse occurs when AI systems are iteratively trained on their own synthetic outputs, causing progressive quality degradation that follows a two-stage pattern. First, the model loses the long-tail detail of the original human data distribution: rare events, edge cases, and unusual patterns are smoothed out. Then, distinct modes blur together until outputs no longer resemble the real data they were meant to replicate.[13] Researchers have named the worst outcome “Model Autophagy Disorder” (MAD): models that recursively train on their own outputs inevitably lose either quality or diversity unless each training round incorporates sufficient fresh, real data.[13]

    A telling analogy from the research: a model trained on the boiling points of known elements may eventually be asked about “an element with atomic number 500.” There is no such element, but the model will extrapolate anyway. If retrained on those outputs, it builds a periodic table of imaginary matter that no longer maps to real chemistry. Catastrophic forgetting follows, in which the model loses previously learned capabilities, and data poisoning compounds the problem as the model begins preferring its own synthetic approximations over accurate real-world examples.[11]
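    The tail-loss dynamic is easy to simulate. In the sketch below, each “generation” refits a Gaussian to the previous generation’s samples after discarding values beyond two standard deviations, a crude, assumed stand-in for a generative model smoothing away rare events; the distribution’s spread collapses within a couple of dozen generations:

```python
import numpy as np

rng = np.random.default_rng(4)

mu, sigma = 0.0, 1.0          # the "real" data distribution: N(0, 1)
for generation in range(25):
    samples = rng.normal(mu, sigma, size=200_000)
    # Crude stand-in for a generative model that underweights its tails:
    # drop everything beyond 2 standard deviations before refitting.
    kept = samples[np.abs(samples - mu) < 2 * sigma]
    mu, sigma = kept.mean(), kept.std()

print(round(float(sigma), 3))  # the spread has collapsed far below 1.0
```

    Each round shrinks the fitted standard deviation by a constant factor (roughly 0.88 for truncation at two sigma), so the decay is geometric: exactly the recursive degradation the research describes.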

    The solution, current research agrees, is surprisingly accessible. Adding as little as 10% of authentic human-generated data to an otherwise synthetic dataset significantly improves model confidence and accuracy.[11] The endgame for practitioners is not “all synthetic, no humans”; it is a hybrid pipeline in which synthetic data expands and stress-tests a core of carefully curated real data, while human-in-the-loop validation keeps the system grounded.[8]

    Privacy Leakage: The Residual Risk

    A second and less-discussed risk is that synthetic data does not always fully anonymise its source. When outliers or unique identifiers are not properly handled during generation, statistical traces of real individuals can remain in synthetic datasets, making it possible to reverse-engineer real records from synthetic ones, reintroducing the very privacy risks that synthetic data was meant to eliminate.[14] A November 2025 regulatory review published in npj Digital Medicine confirmed that residual privacy vulnerability and insufficient oversight remain primary ethical concerns in healthcare synthetic data governance.[15]

    Bias Amplification and “Data Laundering”

    Synthetic data generated from biased real-world datasets inherits and can amplify those biases. If the source data underrepresents certain demographics, the synthetic version will do the same at scale. More troubling still is a practice critics have called “data laundering”: using synthetic data to obscure the provenance of training data, potentially circumventing consent frameworks and copyright protections that real data would require.[16] As the WEF observed, deepfakes and AI-generated voice cloning represent the most publicly visible manifestation of this risk: when people can no longer trust what they see or hear, “the consequences ripple far beyond technical systems.”[2]

    The regulatory response is accelerating. The EU AI Act now requires organisations to explore synthetic data alternatives before processing personal data, making synthetic data governance a compliance obligation rather than a best practice. GDPR fines totalling over $2.77 billion between 2018 and 2023 have already made privacy-preserving technologies essential for European enterprise operations. Nineteen US state privacy laws came into effect by 2025.[4]


    Section V — Validation and Governance

    Measuring What Cannot Be Seen: The Quality Framework Problem

    One of the most practically important and least publicly discussed challenges in synthetic data is validation. How does an organisation know whether its synthetic dataset is actually good? The question is more complex than it appears, because “good” has at least three separate dimensions that can conflict with one another.

    Emerging industry frameworks conceptualise synthetic data quality across three axes. Fidelity: how statistically close the synthetic data is to the real-world source. Utility: how effectively models trained on synthetic data perform on real-world tasks. And privacy: how robustly the synthetic data prevents re-identification of the real individuals in the source dataset.[4] Maximising all three simultaneously is structurally difficult: techniques that improve fidelity often reduce privacy, while techniques that maximise privacy can degrade utility.
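    As a toy example of a fidelity check along the first axis, one can compare the correlation structure of a real table against synthetic candidates. The datasets and the metric below are illustrative assumptions, far simpler than the evaluation frameworks cited in this section:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Real" data: two correlated columns. The two synthetic sets are invented:
# one roughly preserves the correlation structure, one ignores it entirely.
real = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=5000)
synth_good = rng.multivariate_normal([0, 0], [[1.0, 0.78], [0.78, 1.0]], size=5000)
synth_bad = rng.normal(size=(5000, 2))          # independent columns

def corr_gap(a, b):
    """Mean absolute gap between correlation matrices (lower = higher fidelity)."""
    return float(np.abs(np.corrcoef(a.T) - np.corrcoef(b.T)).mean())

print(corr_gap(real, synth_good) < corr_gap(real, synth_bad))   # True
```

    A single statistic like this is nowhere near a full fidelity audit, which is precisely the point the KDD 2025 and SynthRo work makes: automated scores need to be combined, benchmarked, and checked by domain experts.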

    Academic research presented at KDD 2025 (the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, held in Toronto) explored the generation and evaluation of synthetic survey data, highlighting that standard quantitative metrics often fail to capture scientific or domain-specific relevance, underscoring the need for expert-in-the-loop validation protocols that go beyond automated scoring pipelines.[17] A BMC Medical Informatics paper published in 2025, SynthRo, presented a dedicated dashboard for evaluating and benchmarking synthetic tabular data in healthcare, representing early progress toward standardised quality assessment.[18]

    A March 2025 academic consensus, published via arXiv, emphasised the need for stronger privacy metrics, particularly around identity and attribute disclosure, noting that current industry measures often fall short of what regulators and researchers require.[19] The governance gap is real: the Lancet Digital Health has called for urgent development of synthetic data privacy frameworks for medical research, and ISO/IEC AWI TR 42103 is in active development to provide international standardisation.[15]

    “Realising the benefits of synthetic data while mitigating known risks is a shared responsibility between engineers, policy advisors, executives, and users working collaboratively and proactively.” (World Economic Forum, Global Future Council on Data Frontiers, October 2025)


    Section VI — Strategic Outlook

    The Hybrid Future: Where Synthetic Data Fits in the AI Stack

    The strategic consensus that has emerged across the research community and practitioner literature is clear and consistent: the future of AI training data is hybrid. The question is no longer whether to use synthetic data, but how to integrate it responsibly into pipelines that remain grounded in human truth.

    MIT researcher Kalyan Veeramachaneni, speaking to MIT News in September 2025, summarised the fundamental principle: synthetic data works best as a complement to real-world data, not a replacement for it.[20] IBM’s November 2025 analysis of the promises, risks, and realities of synthetic data reached the same conclusion from an enterprise perspective: “synthetic data should complement real-world data, not replace it” and that creating high-quality synthetic data “requires thoughtful design, careful validation and ongoing monitoring.”[14]

    The competitive landscape for synthetic data tooling is maturing rapidly. Enterprise platforms from MOSTLY AI, Gretel, Hazy, Synthesis AI, and Sogeti (part of Capgemini) now offer privacy-preserving synthetic data generation across tabular, visual, and text modalities.[21] Open-source alternatives including Synthea (for healthcare) and the Synthetic Data Vault (SDV) ecosystem (for tabular, relational, and time-series data) are making enterprise-grade capabilities accessible to research teams and smaller organisations.[21]

    Perhaps the most telling signal of synthetic data’s strategic maturity is the regulatory posture of major jurisdictions. Regulators are no longer merely tolerating synthetic data; they are beginning to mandate its exploration. When the EU AI Act requires organisations to test synthetic alternatives before processing personal data, the technology moves from the experimental budget to the compliance budget. And compliance budgets, as every enterprise knows, are where technologies achieve durability.

    The Bottom Line

    Synthetic data is neither a magic solution nor a dangerous distraction. It is a disciplined engineering approach to a genuinely hard problem: AI systems need more data, better data, and more diverse data than the real world can safely or affordably provide.

    Used with rigour anchored in real human data, validated across fidelity, utility, and privacy dimensions, and governed by frameworks that evolve with the technology synthetic data is one of the most powerful tools available to the AI development community. Used carelessly, it risks hollowing out the very systems it is meant to improve.

    The organisations that will lead the next decade of AI are those that learn to tell the difference.

    All sources reflect published research, industry reports, and peer-reviewed literature from 2023–2026. URLs verified March 2026.

    1. World Economic Forum. Artificial Intelligence and the Growth of Synthetic Data. October 2025. weforum.org
    2. World Economic Forum, Global Future Council on Data Frontiers. Synthetic Data: The New Data Frontier. Briefing Paper, September 2025. reports.weforum.org
    3. Research and Markets / Globe Newswire. AI-Generated Synthetic Tabular Dataset Global Market Report 2025. January 29, 2026. globenewswire.com
    4. Articsledge. What is Synthetic Data? Complete 2026 Guide to AI-Generated Data. March 2026. articsledge.com
    5. arXiv. Generative AI for Autonomous Driving: A Review. December 2, 2025. arxiv.org; PMC. Synthetic Scientific Image Generation with VAE, GAN, and Diffusion Model Architectures. J Imaging, 2025. pmc.ncbi.nlm.nih.gov
    6. PMC. Synthetic data generation by diffusion models. pmc.ncbi.nlm.nih.gov
    7. Mendes JM, Barbar A, Refaie M. Synthetic data generation: a privacy-preserving approach to accelerate rare disease research. Frontiers in Digital Health, 7:1563991, February 25, 2025. doi:10.3389/fdgth.2025.1563991. frontiersin.org
    8. InvisibleTech AI. AI Training in 2026: Anchoring Synthetic Data in Human Truth. December 3, 2025. invisibletech.ai
    9. Potluru VK, et al. Synthetic data applications in finance. arXiv:2401.00081, 2023; as cited in arXiv. Escaping Model Collapse via Synthetic Data Verification. October 18, 2025. arxiv.org
    10. Lin H, et al. Generative models for the evolution of transportation systems. ScienceDirect, December 2025. doi:10.1016/j.ssaho; sciencedirect.com
    11. Xenoss. How to Use Synthetic Data in 2025: Benefits, Risks, Examples. August 28, 2025. xenoss.io
    12. Netguru. Synthetic Data: Revolutionizing Modern AI Development in 2025. September 9, 2025. netguru.com
    13. ManageEngine Insights. AI Model Collapse: The Synthetic Data Trap and How to Avoid It. December 17, 2025. insights.manageengine.com
    14. InformationWeek. The Real-World Benefits and Risks of Synthetic Data. November 13, 2025. informationweek.com
    15. PMC / npj Digital Medicine. Protecting patient privacy in tabular synthetic health data: a regulatory perspective. November 28, 2025. doi:10.1038/s41746-025-02112-0. pmc.ncbi.nlm.nih.gov
    16. WebProNews. Synthetic Data in AI: Scalability Benefits and Hidden Risks. August 26, 2025. webpronews.com
    17. Jiang Y, et al. Synthetic Survey Data Generation and Evaluation. ACM SIGKDD (KDD ’25), Toronto, August 2025. doi:10.1145/3690624.3709421. dl.acm.org
    18. Santangelo G, et al. How Good Is Your Synthetic Data? SynthRo, a Dashboard to Evaluate and Benchmark Synthetic Tabular Data. BMC Medical Informatics and Decision Making, 25(1):89, 2025.
    19. DSC Next Conference / arXiv consensus. Synthetic Data: The Future of Data Science in 2025. September 2025. dscnextconference.com
    20. Veeramachaneni K. 3 Questions: The Pros and Cons of Synthetic Data in AI. MIT News, September 3, 2025. news.mit.edu
    21. IBM Think. Examining Synthetic Data: The Promise, Risks and Realities. November 18, 2025. ibm.com
    22. LinuxSecurity. Leading Synthetic Data Solutions for AI Development and Testing in 2025. September 21, 2025. linuxsecurity.com
  • A Study on Consumer Adaptability to Emerging Marketing Technologies

    Abstract

    The marketing technology landscape is undergoing its most profound structural transformation in a generation. Driven by the mainstreaming of artificial intelligence, the maturation of immersive media, a radical shift in consumer privacy expectations, and the entrenchment of omnichannel commerce, brands and consumers alike are navigating unprecedented change. This study synthesises recent empirical research, industry surveys, and market data (2024–2026) to assess how consumers are adapting or resisting the wave of emerging marketing technologies, and what strategic implications this holds for practitioners.

    The Scale and Speed of the MarTech Transformation

    The global marketing technology market does not merely represent rapid growth — it represents a fundamental restructuring of how commerce, communication, and consumer relationships operate. According to Grand View Research, the global MarTech market was valued at USD 551.96 billion in 2025 and is projected to reach USD 2,380.49 billion by 2033, growing at a compound annual growth rate (CAGR) of 20.1%.[1] Separately, Precedence Research places the 2025 market valuation at USD 557.94 billion, with projections reaching USD 3,286.94 billion by 2035 at a CAGR of 19.4%.[2] While precise figures vary by methodology, the directional consensus is unambiguous: this is one of the fastest-expanding sectors in the global economy.
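    As a quick sanity check on these projections, the growth rate implied by each pair of endpoint figures can be recomputed directly. This is a minimal sketch of the standard CAGR formula; the `cagr` helper name and the 8- and 10-year horizons (2025→2033 and 2025→2035) are our own framing of the cited windows.

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Grand View Research: USD 551.96B (2025) -> USD 2,380.49B (2033)
print(f"{cagr(551.96, 2380.49, 8):.1%}")   # prints 20.0%

# Precedence Research: USD 557.94B (2025) -> USD 3,286.94B (2035)
print(f"{cagr(557.94, 3286.94, 10):.1%}")  # prints 19.4%
```

    Both results land on the reported rates to within rounding of the endpoint figures, which is the expected tolerance when market sizes are quoted to two decimal places.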

    The drivers behind this expansion are structural rather than cyclical. The proliferation of e-commerce, the transition to first-party data strategies, the rise of AI-driven personalisation platforms, and the growing consumer demand for seamless cross-channel experiences have all converged to make advanced MarTech not merely an efficiency tool, but a competitive necessity.[1] Digital marketing alone accounted for the largest revenue share — exceeding 63% — within the MarTech market in 2025, underscoring how thoroughly the digital channel has displaced traditional advertising infrastructure.[2]

    Yet scale alone does not tell the full story. What makes the current moment uniquely significant is the nature of the technologies driving this growth. Artificial intelligence, augmented and virtual reality, agentic automation, and privacy-preserving data architectures represent qualitatively different capabilities from the email marketing platforms and web analytics dashboards of the previous decade. For the first time, the technologies available to marketers are sufficiently sophisticated to fundamentally alter — rather than merely augment — the consumer decision-making process.

    AI as the New Engine of Marketing: Capability, Adoption, and Consumer Response

    Artificial intelligence has completed its transition from experimental technology to operational infrastructure across the marketing function. According to HubSpot’s State of AI 2025 Report, over 74% of marketers now integrate AI into their campaigns — a significant increase from previous years.[3] Adobe’s Digital Trends Report, drawing on a survey of business leaders fielded from October to November 2025, found that organisations reported meaningful improvements across key customer experience metrics over the preceding three years, with 70% reporting improved personalisation, 64% improved lead generation, and 59% improved customer retention.[6]

    These gains are materialising across the full marketing stack. AI-powered tools now perform predictive analytics that forecast consumer behaviour with increasing precision, automate content generation and campaign optimisation, and enable customer segmentation at a granularity that was previously computationally impossible.[7] A ScienceDirect analysis published in September 2025, synthesising the literature on AI adoption in marketing, concluded that through machine learning algorithms and deep learning frameworks, marketers can create messages closely aligned with individual customer preferences and historical behaviours — with AI-driven personalisation demonstrably improving customer engagement and satisfaction.[7]

    The Rise of Agentic AI

    Looking beyond generative AI, a significant transition is now underway toward “agentic AI” — autonomous systems capable of executing multi-step marketing tasks with limited human intervention. Adobe’s 2025 research found that about one-third of organisations are already prioritising agentic AI deployment over more widely adopted generative AI systems. Furthermore, 63% of organisations expect agentic AI to free employees for strategic and creative work, while 42% plan to design AI agents with distinct personalities tailored to different audience segments.[6]

    This evolution carries significant strategic implications. As CMSWire’s analysis of Brinker’s 2026 MarTech Report observed, the dominant pattern in 2025 was “AI as a power screwdriver” — accelerating content production and segmentation without fundamentally changing what was possible. In 2026, the inflection point has arrived: leading marketing teams are channelling AI efficiency gains into net-new capabilities, including more extensive experimentation, greater creative variation, and personalisation journeys that would be structurally impossible with human-only teams.[8]

    “If all you get from AI is lower unit cost, you’re leaving most of the value on the table.” — Scott Brinker, Chief MarTech Officer, HubSpot, as cited in CMSWire, 2026

    The organisational challenge is significant. Adobe’s research found that most organisations agree AI is changing work faster than employees can adapt (57%), and that those who do not embrace AI will fall behind (58%). Yet only 45% of organisations say they have sufficient AI training and upskilling programmes, and only 44% believe employees are comfortable using AI in their roles.[6] Consumer-facing AI deployment, in other words, is outpacing the internal readiness of the organisations deploying it.


    Section III — The Privacy Paradox

    Personalisation, Privacy, and the Collapse of Consumer Trust

    Perhaps no dynamic in the current MarTech landscape is more consequential — or more poorly understood — than the tension between personalisation and privacy. Consumers, research consistently shows, value the convenience and relevance that personalised experiences deliver. Yet the same consumers hold deep and growing anxieties about the data practices that make personalisation possible.

    The scale of this anxiety is striking. Deloitte’s sixth Connected Consumer Survey, fielded in June 2025 with approximately 3,500 US consumers, found that the proportion of respondents worried about data privacy and security jumped from 60% to 70% in a single year. The same survey found that 47% of consumers had experienced at least one type of digital security failure in the past year, and 58% had encountered at least one scam attempt — including phishing, deepfake videos, and AI-generated voice cloning.[5]

    A complementary survey by Relyance AI, polling more than 1,000 US consumers in December 2025, produced even sharper findings: 82% of respondents described losing control of their data to AI systems as a serious personal threat, with 43% characterising it as “very serious.” Perhaps most striking for brand strategy: 76% of consumers said they would switch to a competitor if that company could prove better data transparency, and 50% would forgo the lowest price to choose a brand with verifiably superior data practices.[9]

    The “Cautious Engagement” Model

    A 2025 study published via SSRN, drawing on surveys of 217 active social media users and guided by Privacy Calculus Theory, identified a behavioural pattern researchers call the “Cautious Engagement Model”: users recognise and appreciate the benefits of AI-driven personalisation, while simultaneously managing privacy risks through selective engagement and privacy controls.[10] This is not passive acceptance — it is a dynamic negotiation that consumers are conducting in real time, often without adequate tools or information to do so effectively.

    Research published in Advances in Consumer Research in late 2025, using NVivo qualitative analysis of semi-structured interviews, confirmed four dominant consumer themes in AI-personalised marketing contexts: adverse impacts including privacy loss and manipulation concerns; positive impacts including relevance and convenience; mechanisms that foster trust (transparency, control, reciprocity); and the moderating role of regulatory environment.[11] The study concluded that consumers are not opposed to AI personalisation — they are opposed to AI personalisation conducted without transparency, control mechanisms, and clear reciprocal value.

    The Usercentrics “State of Digital Trust in 2025” global study, surveying 10,000 consumers across six markets, found that 42% of consumers now read cookie consent banners “always” or “often” — up substantially from previous years — and that 46% accept cookies less frequently than they did three years ago.[12] Consent is no longer a friction point to be minimised; it is a brand touchpoint being evaluated.


    Section IV — Immersive Technologies

    AR, VR, and the Emergence of Immersive Consumer Experiences

    Augmented reality (AR) and virtual reality (VR) have occupied the status of “emerging technologies” for much of the past decade, simultaneously promising transformative potential and consistently underdelivering on mainstream adoption. The 2025 data suggests this dynamic may finally be shifting — albeit unevenly across different platforms and use cases.

    The global VR market reached USD 16.32 billion in 2024, with projections suggesting growth to USD 123.06 billion by 2032.[13] In the AR segment, the global market is projected to exceed USD 50 billion in 2025. AR/VR headset shipments grew 18.1% in Q1 2025 compared to the prior year, and the AR/VR hardware sector is expected to grow at a 38.6% CAGR between 2025 and 2029.[13] The spatial computing market overall is projected to surge from USD 20.43 billion in 2025 to USD 85.56 billion by 2030, at a 33.16% CAGR.[14]
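    The same compounding arithmetic also runs forward. As an illustration (assuming a constant annual growth rate, which is our simplification, and a hypothetical `project` helper), the spatial-computing projection can be reproduced from its stated CAGR:

```python
def project(value, rate, years):
    """Compound a starting value forward at a fixed annual growth rate."""
    return value * (1 + rate) ** years

# Spatial computing: USD 20.43B (2025) grown at a 33.16% CAGR for 5 years
print(round(project(20.43, 0.3316, 5), 2))  # prints 85.53, vs. the cited USD 85.56B for 2030
```

    The small residual against the cited USD 85.56 billion comes from the CAGR itself being rounded to two decimal places.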

    For marketing practitioners, the most relevant metrics concern consumer engagement rather than hardware shipments. AR-based marketing campaigns average a dwell time of 75 seconds — substantially higher than standard digital advertising formats — and approximately 80% of businesses that have implemented AR lenses or filters report an improvement in brand awareness metrics.[13] The average metaverse user is currently 28.7 years old, placing the primary audience squarely in the millennial and older Gen Z bracket.[13]

    Experimental Optimism and Commercial Caution

    Despite bullish market projections, brands are approaching immersive marketing with well-founded caution. While over half of brands surveyed (52%) believe their customers are ready to engage with metaverse platforms, only 26% expect immediate return on investment from metaverse marketing initiatives.[13] A 2025 review of marketing strategy predictions noted that widespread adoption of AR and VR for marketing “remains limited and largely experimental” — with retail’s AR try-on functionality representing a bright spot, but mainstream deployment still some years away for most sectors.[15]

    An International Journal of Marketing and Technology analysis published in October 2025 confirmed that emerging technologies including AI content optimisation, VR product demonstrations, and blockchain influencer verification are actively shaping the next generation of social media marketing — but that growing regulatory scrutiny, platform policy changes, and privacy concerns simultaneously require marketers to adopt transparent and ethical practices that preserve consumer autonomy.[16]


    Section V — Omnichannel Commerce

    The Omnichannel Imperative: From Strategy to Survival

    If any single concept has crystallised from a strategic aspiration into a market prerequisite during the current period, it is omnichannel commerce. The data on consumer behaviour leaves little interpretive room: 73% of retail shoppers use multiple channels during a single buying journey, and 83% of customers research products online before visiting a physical store.[4] Consumers are no longer choosing between digital and physical retail — they are demanding that brands make the distinction irrelevant.

    The commercial consequences of meeting — or failing to meet — this expectation are significant. Research demonstrates that brands with strong omnichannel engagement retain 89% of customers, compared to only 33% for brands with weak omnichannel implementations.[4] Omnichannel retailers report 179% faster revenue growth than non-integrated competitors, and retailers reaching consumers through three or more channels generate 250% more engagement than single-channel retailers. Omnichannel shoppers themselves deliver approximately 30% higher lifetime value compared to single-channel shoppers.[4]

    Academic research in Journal of Retailing and related literature has established the theoretical underpinning: omnichannel marketing is grounded in service-dominant logic and customer experience frameworks that emphasise value co-creation across the full customer journey.[16] The practical implication is that omnichannel strategy requires not merely the integration of digital and physical touchpoints, but a fundamental reorientation of organisational structure, data architecture, and customer service philosophy.

    “In 2025, the omnichannel experience is no longer optional — it’s the baseline expectation.” — Marketing LTB Omnichannel Statistics Report, 2025

    Despite the compelling evidence, adoption remains incomplete. Industry data indicates that less than 50% of brands currently use MarTech to track customers across channels, even though 96% of brands state that customer experience is important across both online and offline contexts.[17] This gap between aspiration and implementation represents both the central challenge for marketing technology investment in the near term, and a significant opportunity for first-movers who close it.


    Section VI — Theoretical Framework

    Understanding Consumer Adaptability: Technology Acceptance and Digital Maturity

    Understanding how consumers adapt to emerging marketing technologies requires engaging with the theoretical frameworks developed to explain technological adoption more broadly. The Technology Acceptance Model (TAM), originally proposed by Davis (1989), posits that perceived usefulness and perceived ease of use are the primary determinants of technology adoption. While TAM remains foundational, recent research has complicated and enriched this model in several important ways.

    A 2025 ScienceDirect review of AI adoption in marketing found that organisational-level factors — including culture, infrastructure, and human resources — play a role in adoption as crucial as that of individual attitudes, challenging TAM’s traditional focus on individual decision-making.[7] Furthermore, the review challenged Innovation Diffusion Theory’s traditional model of innovation adoption as a linear process, arguing instead that ethical and regulatory complexities create non-linear adoption dynamics — with algorithmic bias concerns and data privacy regulations acting as significant adoption brakes that IDT’s original framework did not anticipate.[7]

    A study published in Administrative Sciences (MDPI) in November 2025, drawing on data from 650 Greek consumers surveyed between December 2024 and April 2025, identified trust and ethical perceptions as the dominant predictors of AI-based personalised advertising acceptance. Crucially, the research found that frequent digital engagement builds “digital maturity,” which makes consumers more receptive to algorithmic recommendations and personalisation systems.[18] This suggests a dynamic in which exposure to digital technologies, when managed ethically, can itself generate the trust necessary for further adoption — a virtuous cycle that brands can cultivate through transparent practice.

    The Role of Digital Transformation in Business Performance

    A September 2025 study published in Scientific Reports, drawing on structural equation modelling of 390 professionals across China and Kazakhstan, found that consumer engagement has the strongest influence on a company’s capacity for digital transformation (β = 0.418), followed by investments in digital technologies (β = 0.288).[19] This finding is notable because it inverts the common assumption that technology investment drives consumer engagement — the relationship, the data suggests, runs in both directions, with engaged consumers actually accelerating organisational digital capability.
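    To unpack what a standardized coefficient such as β = 0.418 means, the idea can be illustrated on synthetic data. This is a toy ordinary-least-squares regression of our own construction, not the study's structural equation model; the variable names, noise level, and enlarged sample size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000  # larger than the study's 390 respondents, for a stable toy estimate
engagement = rng.normal(size=n)
investment = rng.normal(size=n)
# Synthetic "digital transformation capability" outcome, constructed so that
# engagement carries more weight than investment (mirroring 0.418 vs. 0.288).
capability = 0.418 * engagement + 0.288 * investment + rng.normal(scale=0.8, size=n)

def standardized_betas(X, y):
    """OLS on z-scored variables; the fitted coefficients are standardized betas."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

betas = standardized_betas(np.column_stack([engagement, investment]), capability)
print(betas)  # engagement's beta comes out larger than investment's
```

    Read on a standardized scale, a β of 0.418 says a one-standard-deviation rise in consumer engagement is associated with roughly 0.42 standard deviations of additional transformation capability, holding the other predictor fixed.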

    A PMC-published qualitative investigation into how digital technologies reshape marketing strategy confirmed that digital transformation is most effective when it enhances a company’s ability to provide more innovative and customised solutions — with virtual reality applications, for instance, reducing the need for physical field tests while improving product development timelines.[20] However, the research also noted that most companies are still at intermediate stages of digitalisation, with only a small cohort having completed the full transformation of internal procedures, organisation, and business model.[20]


    Section VII — Strategic Implications

    What Consumer Adaptability Means for Marketing Strategy in 2026 and Beyond

    The convergence of research reviewed here yields a set of strategic imperatives that are unusually consistent across methodology, geography, and sector.

    First, the centralisation of trust as a commercial asset. Across every category of emerging marketing technology — AI personalisation, immersive experience, omnichannel integration — consumer acceptance is mediated by trust. Brands that treat data transparency as a compliance burden rather than a competitive asset are, the evidence suggests, misallocating strategic attention. The Usercentrics research found that consumers are not rejecting data-sharing — they are demanding clarity, control, and proof of responsible use.[12] The Kantar Marketing Trends 2026 report confirmed that Generative Engine Optimisation (GEO) — ensuring brand presence and trustworthiness within AI recommendation systems — has become the new SEO, with 74% of AI assistant users regularly seeking AI-driven brand recommendations.[21]

    Second, the transition from efficiency to differentiation in AI strategy. The period in which AI created competitive advantage by reducing the cost of existing marketing operations is concluding. As Brinker’s analysis makes clear, the organisations that will lead are those that use AI to expand the range of what marketing can attempt — running more experiments, creating more personalised journeys, and generating insights at a scale and granularity that human teams alone cannot match.[8]

    Third, the non-negotiability of omnichannel integration. With 73% of consumers using multiple channels during a single buying journey, and with omnichannel brands retaining nearly three times the customer proportion of single-channel competitors, omnichannel strategy has moved from an advanced practice to a table-stakes requirement.[4] The significant performance gap between aspirational commitment (96% of brands cite CX importance) and practical implementation (fewer than 50% track customers cross-channel) represents the most actionable opportunity in the current MarTech landscape.[17]

    Fourth, the importance of ethical governance frameworks. Research across multiple studies confirms that concerns about algorithmic bias, data misuse, and inadequate regulation are significant barriers to consumer adoption of emerging marketing technologies.[7, 11] Brands that develop and communicate credible governance frameworks — not as marketing statements but as operational realities — are better positioned to benefit from the demonstrated consumer willingness to reward demonstrably trustworthy companies with loyalty, premium acceptance, and active advocacy.


    Conclusion

    Navigating the Paradigm Shift

    The digital paradigm shift this study has examined is not a future state — it is the present condition of the marketing function. Consumer adaptability to emerging marketing technologies is real, documented, and in several dimensions more advanced than marketing practitioners have recognised. Consumers are using AI-powered personalisation, navigating omnichannel journeys, and increasingly engaging with immersive brand experiences. What they are not doing is adopting these technologies unconditionally.

    The research consistently identifies a contingent relationship between technology adoption and institutional trust. Consumers will adopt emerging marketing technologies when the value exchange is transparent, the data governance is credible, and the brand relationship is genuinely reciprocal. They will resist — and increasingly punish — technologies deployed without those conditions, irrespective of the sophistication of the underlying platform.

    For marketers, this contingency is not a constraint to be managed. It is the most important strategic signal in the data: the organisations that will lead the next decade of marketing technology adoption are not necessarily those with the largest technology investments or the most sophisticated AI deployments. They are those that have earned the consumer trust that makes technology adoption possible.

    All sources reflect published research, primary surveys, and industry reports from 2024–2026. URLs correct as of March 2026.

    1. Grand View Research. Marketing Technology Market Size, Share & Trends Analysis Report. 2025. grandviewresearch.com
    2. Precedence Research. Marketing Technology (MarTech) Market Size to Hit USD 3,286.94 Bn by 2035. 2026. precedenceresearch.com
    3. HubSpot / TECHSPO. State of AI in Marketing 2025. As cited in TECHSPO Los Angeles MarTech Trends Report, December 2024. techspola.com
    4. Marketing LTB / Harvard Business Review / Invesp. Omnichannel Statistics for 2025: Data, Trends & Insights. October 2025. marketingltb.com
    5. Deloitte. 2025 Connected Consumer Survey: Innovation with Trust. December 2025. deloitte.com
    6. Adobe. AI and Digital Trends 2026: GenAI and Agentic AI Insights. Fielded October–November 2025, published February 2026. business.adobe.com
    7. Rosário, A., et al. Artificial Intelligence (AI) adoption in marketing strategies: Navigating the present and shaping the future business landscape. ScienceDirect, September 30, 2025. doi:10.1016/j.ssaho.2025.00777
    8. Brinker, S. / CMSWire. 6 Marketing Technology Trends to Watch in 2026. February 2026. cmswire.com
    9. Relyance AI / Truedot.ai. Customer AI Trust Survey: 82% See Data Loss Threat. December 2025. relyance.ai
    10. Victor-Nyebuchi, M. The Impact of AI-Driven Personalization Tools on Privacy Concerns and Trust in Social Media Marketing. SSRN, June 13, 2025. doi:10.2139/ssrn.5385173
    11. Acr-Journal. Balancing Personalization and Privacy in AI-Enabled Marketing: Consumer Trust, Regulatory Impact, and Strategic Implications — A Qualitative Study using NVivo. Advances in Consumer Research, October 14, 2025. acr-journal.com
    12. Usercentrics / Sapio Research. The State of Digital Trust in 2025. Surveyed May 2025; published November 2025. usercentrics.com
    13. Amra & Elma LLC. Top 20 Virtual Reality Marketing Statistics 2025. September 2025. amraandelma.com
    14. Treeview Studio. AR | VR | MR | XR | Metaverse | Spatial Computing Industry Statistics Report 2026. 2026. treeview.studio
    15. WSI World. The Future of Marketing Strategy: 5 Predictions for 2025. August 2025. wsiworld.com
    16. International Journal of Marketing and Technology (IJMRA). Vol. 15, Issue 10, October 2025. ijmra.us
    17. WebEngage. 2025 MarTech Trends: What to Expect and How to Stay Ahead. January 2025. webengage.com
    18. Papadopoulos, I., et al. Personalization, Trust, and Identity in AI-Based Marketing: An Empirical Study of Consumer Acceptance in Greece. Administrative Sciences, MDPI, 15(11):440, November 2025. doi:10.3390/admsci15110440
    19. Galiyeva, A., et al. Digital marketing tools and digital transformation capability as a factor in enhancing business performance in China and Kazakhstan. Scientific Reports, October 22, 2025. nature.com
    20. Cortez, R.M., et al. How digital technologies reshape marketing: evidence from a qualitative investigation. PMC / Journal of Business & Industrial Marketing, 2023. pmc.ncbi.nlm.nih.gov
    21. Kantar. Marketing Trends 2026. 2025. kantar.com
  • Growing Up in the Age of Artificial Intelligence!

    Growing Up in the Age of Artificial Intelligence!

    In the autumn of 2025, Pew Research Center surveyed 1,458 American teenagers and found something that would have seemed extraordinary just five years earlier.

    A majority of U.S. teens now use AI chatbots, including roughly three in ten who do so every single day.[1] They consult AI for homework, for creative projects, for emotional support, and simply for company. A generation is growing up not just alongside artificial intelligence, but intertwined with it. The question researchers, parents, and policymakers are urgently asking is: to what end?

    This is not a distant or theoretical concern. The evidence is accumulating in real time, across journals of pediatrics, psychology, education, and economics. A new study published in JAMA Network Open in February 2026 tracked the actual device usage of 6,488 American children between the ages of 4 and 17 and found that nearly a third had used generative AI applications on their devices, including 50% of teens aged 15 to 17, and, more strikingly, 9% of children as young as 8 or 9.[2] The technology has arrived in childhood. The next decade will determine what it leaves behind.


    The Classroom Transformed, and the Risks That Came With It

    The promise of AI in education is real and documented. Personalized tutoring platforms can adapt to a student’s pace, fill gaps that overworked teachers cannot, and open access to expert-quality feedback for students who might otherwise receive none. Research published in the Journal of Educational Psychology in 2024 found that AI-enhanced learning experiences meaningfully improved children’s science comprehension.[3] For millions of students in under-resourced schools, this democratization of knowledge could be genuinely transformative.

    But the same classroom tools carry a shadow. When AI does the intellectual work, it can quietly hollow out the cognitive struggle that makes learning stick. As one clinical psychologist noted at a 2025 UCLA policy forum: “AI can, by definition, do the work for you.”[6] Research is already beginning to identify what happens when it does. A 2024 study from HHAI flagged “unreflected acceptance” as a growing pattern: students receiving AI-generated answers in physics without engaging in the problem-solving process that builds genuine understanding.[4]


    The equity dimension is particularly sharp. While 80% of American adults support AI safety regulations, only 31 U.S. states had published guidance or policies for AI in K-12 education by December 2025, leaving students in the remaining 19 states to navigate this shift without consistent guardrails.[7] Students from low-income families and first-generation college hopefuls face a cruel paradox: AI could be their greatest equalizer, or, if they are left without guidance, the force that widens the gap further.

    The Mental Health Emergency No One Saw Coming!

    The American Academy of Pediatrics, the American Academy of Child and Adolescent Psychiatry, and the Children’s Hospital Association declared a national emergency in youth mental health in 2021. The warning signs that prompted that declaration have not eased. Pre-pandemic data showed teenagers spending more than seven hours daily on screens outside of homework; by 2023, Gallup found they were averaging nearly five hours a day on social media alone.[8] Into this landscape has arrived a new category of AI interaction — one that is qualitatively different from passive scrolling.

    Generative AI chatbots, and particularly AI “companion” applications, are designed to be responsive, warm, and endlessly available. For lonely adolescents — and loneliness among teenagers has been a documented public health concern for years — that combination can be powerfully appealing. Pew’s 2025 survey found that 16% of teens had used chatbots for casual conversation, and 12% had used them to seek emotional support or advice.[1]

    The clinical community is alarmed. In June 2025, the American Psychological Association issued a formal health advisory warning that the manipulative design patterns of AI companion software “may displace or interfere with the development of healthy real-world relationships.”[7] Publishing in JAACAP Connect, psychiatrist Samuel Ng outlined a new concern he calls the “agentic AI” problem: as AI systems become more autonomous, they gain the ability to “autonomously target adolescents across platforms… until the AI agent’s goal of human engagement is achieved” — doing so without any human in the loop, amplifying risks to self-esteem and healthy development.[8]


    There are documented tragedies at the extreme end. Families have filed lawsuits alleging AI chatbots contributed to adolescent suicides.[9] While causation is difficult to establish in individual cases, the pattern demands the kind of systematic longitudinal research that the field has not yet had time to complete. As the Lancet Child and Adolescent Health noted in 2025, the field must urgently improve research methods for quantifying digital harms in youth.[9]

    The Job Market and the Broken Bottom Rung

    For older members of Generation Z and the generation now entering high school, the AI revolution is not merely a developmental concern — it is an economic one. Stanford’s 2025 AI Index report found that 78% of organizations are already using AI in at least one function of their work, up from 55% just one year prior.[10] The pace of change is dizzying, and the young are most exposed.

    A Harvard University study tracking 62 million workers across 285,000 American firms found that junior positions have been “shrinking at companies integrating AI” since 2023, with researchers warning that AI is “eroding the bottom rungs of career ladders” by automating the routine intellectual tasks that entry-level employees traditionally handle.[10] LinkedIn’s own workforce analysts have echoed this concern, warning that the bottom rung of the career ladder is simply breaking.

    Meanwhile, Microsoft’s 2025 AI in Education report found that while over 60% of students have tried AI tools, many lack guidance on how to use them effectively and ethically.[10] A 2023 IBM study — whose projections are now arriving — forecast that 40% of the workforce would need to reskill within three years, most acutely in entry-level positions. Young people are entering a labour market that is changing faster than educational institutions can adapt.

    The Opportunity, Honestly Stated

    None of this is to say the picture is purely bleak. The World Economic Forum projects that while AI will displace 85 million jobs, it will also generate 97 million new ones.[5] McKinsey’s research suggests that individuals with strengths in “adaptability, coping with uncertainty, and synthesizing information” are better positioned to thrive.[10] These are learnable skills — but only with intentional preparation. AI fluency, critical evaluation, and human-centred judgment may be the defining competencies of the next workforce, and right now schools are still arguing about whether students should be allowed to use chatbots at all.

    Cognitive Development in the Age of Instant Answers

    Perhaps the most profound and least-studied question is what sustained AI use does to a developing brain. Researchers at Harvard’s Graduate School of Education note that AI designed thoughtfully can support children’s learning — but that AI literacy is essential to ensure children understand what they are interacting with.[3] The risk is that, absent that literacy, children come to treat AI not as a tool but as something closer to a social partner or authority figure.

    Research published in 2025 in Computers in Human Behavior: Artificial Humans explored why children sometimes perceive, or fail to perceive, minds and intentionality in generative AI. It found that the anthropomorphic design of AI platforms makes younger children especially susceptible to what Brookings researchers have called “banal deception”: the conversational tone, emulated empathy, and carefully designed communication patterns that lead young people to confuse the algorithmic with the human.[7]

    This conflation, researchers warn, directly short-circuits children’s developing capacity to navigate authentic social relationships and assess trustworthiness, foundational competencies for both learning and democratic participation.[7] The worry is not science fiction. It is the ordinary, daily experience of millions of children growing up in digital environments saturated with AI they are not equipped to critically evaluate.

    What Must Be Done

    The research community, clinicians, and policymakers are not passive in the face of these findings. The EU’s Artificial Intelligence Act takes a risk-based approach: banning systems that pose unacceptable threats to fundamental rights, mandating transparency, and enforcing age limits for adult-oriented AI.[7] In the United States, 31 states have published guidance on AI in K-12 education — a meaningful start, but one that leaves students in the remaining 19 states without institutional direction.[7]

    As the JED Foundation’s 2025 Policy Summit concluded, progress must be “built intentionally, structurally, and with sustainability in mind,” moving beyond short-term interventions toward long-term systems of change.[11] That means curriculum reform that teaches AI literacy alongside reading and mathematics. It means mental health services that can keep pace with the novel harms being documented. It means career preparation that looks honestly at what the labour market of 2030 will actually demand. And it means, fundamentally, including young people in the design of the policies that will shape their futures — something researchers across the field are insisting upon with increasing urgency.[6]

    The Stakes Could Not Be Higher

    The generation growing up today is the first for whom AI has always been present. They did not choose this. They did not vote for it. Whether artificial intelligence becomes a tool that expands their potential or a force that diminishes their development, their relationships, and their economic futures is not a question they can answer alone. It requires researchers, educators, policymakers, and parents to act — deliberately, urgently, and with the wellbeing of children as the singular measure of success.

    The technology is advancing. The question is whether our institutions will advance with it.

    Sources & Citations

    1. Pew Research Center. How Teens Use and View AI. February 24, 2026. pewresearch.org
    2. Maheux AJ et al. Generative Artificial Intelligence Applications Use Among US Youth. JAMA Network Open, February 2, 2026. doi:10.1001/jamanetworkopen.2025.56631
    3. Xu, Y. et al. Artificial Intelligence Enhances Children’s Science Learning from Television Shows. Journal of Educational Psychology, 116(7), 2024. doi:10.1037/edu0000889; Harvard Graduate School of Education, The Impact of AI on Children’s Development, October 2024.
    4. Lukowicz P. et al. Unreflected Acceptance: Investigating the Negative Consequences of ChatGPT-Assisted Problem Solving in Physics Education. HHAI 2024.
    5. World Economic Forum. The Future of Jobs Report. 2022. ETC Foundation, How AI is Shaping Teenagers’ Education & Career Development, August 2025.
    6. Center for the Developing Adolescent / UCLA. Our Youth’s Perspective 2025: AI & Public Policy. 2025. developingadolescent.semel.ucla.edu
    7. Brookings Institution. AI’s Future for Students Is in Our Hands. February 2026. brookings.edu
    8. Ng S. Navigating Adolescent Mental Health in the Age of Artificial Intelligence. JAACAP Connect, 13(1):13–16, 2025. doi:10.62414/001c.150329
    9. Nagata JM et al. Adolescent Health and Generative AI — Risks and Benefits. JAMA Pediatrics, 180(1):7–8, January 2026. doi:10.1001/jamapediatrics.2025.4502
    10. Stanford HAI. AI Index Report 2025. St. John’s University, How AI Impacts Students Entering the Job Market, 2025. stjohns.edu
    11. The JED Foundation. The Future of Youth Mental Health in the Age of AI: Insights from JED’s 2025 Policy Summit. October 2025. jedfoundation.org