
Algorithmic manipulation: How does it affect our digital decisions?

Introduction: What are algorithms and how do they operate in the digital world?

Algorithms are sets of instructions or ordered steps that a computer follows to solve a problem or perform a task (gub.uy; unesco.org). In everyday life, we can compare an algorithm to a cooking recipe: it indicates the exact sequence of steps to follow (ingredients, measurements, times) to obtain a specific result (gub.uy). Similarly, digital algorithms take input data (e.g., a user's clicks, searches, or likes) and apply rules to generate useful output (e.g., displaying recommended content) (gub.uy; unesco.org). For example, predictive AI platforms use algorithms that analyze user data to "predict" an answer before you finish typing a question into a search engine, or to suggest movies and products that might interest each person (gub.uy).
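
To make the recipe analogy concrete, here is a minimal sketch in Python (with invented data) of an algorithm in this sense: input data goes in, a fixed sequence of steps is applied, and an output comes out. Real platforms use far more elaborate models; this only illustrates the input-steps-output structure.

```python
from collections import Counter

def top_interest(clicks):
    """A toy 'recipe': input data in, ordered steps, one output."""
    counts = Counter(clicks)                  # step 1: tally clicks per topic
    ranked = counts.most_common()             # step 2: order topics by frequency
    return ranked[0][0] if ranked else None   # step 3: output the leading topic

print(top_interest(["sports", "music", "sports", "news", "sports"]))  # -> sports
```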

On digital platforms (social media, search engines, online stores, streaming services, apps, etc.), algorithms act as "invisible editors" that decide what information appears on our screens and in what order. As Eric Risco de la Torre explains, social media algorithms are "systems designed to analyze large amounts of data and offer each user a personalized experience." These systems determine which posts appear in our feeds and how we interact with the platform, all based on our preferences and previous behaviors (ethic.es). Similarly, search algorithms like Google's, or recommendation algorithms (YouTube, Netflix, Spotify), track our browsing history, clicks, viewing time, or ratings, and use that information to anticipate which results or content will be most relevant to us in the future (gub.uy; ethic.es).
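
As a rough illustration of this "invisible editor" role, the sketch below (all names and numbers are hypothetical) scores each post by how well its topic matches a profile built from the user's past behavior, then sorts the feed accordingly. It is a deliberate simplification of engagement-driven ranking, not any platform's actual method.

```python
def rank_feed(posts, user_profile):
    """Order posts so that those matching the user's measured interests
    (and drawing high engagement) appear first."""
    def score(post):
        return user_profile.get(post["topic"], 0.0) * post["engagement"]
    return sorted(posts, key=score, reverse=True)

profile = {"politics": 0.9, "cooking": 0.1}   # inferred from clicks and likes
posts = [
    {"title": "Budget debate heats up", "topic": "politics", "engagement": 120},
    {"title": "Easy weeknight paella",  "topic": "cooking",  "engagement": 300},
]
for post in rank_feed(posts, profile):
    print(post["title"])   # the political post outranks the more popular recipe
```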

This widespread use of algorithms in our digital lives means that even the most personal decisions can be influenced (or even directed) by these automated systems. In this informative article, we will explore how algorithms affect us in different areas, from online shopping and entertainment to politics, health, and lifestyle, and what key concepts derive from this algorithmic personalization. We will also review concrete examples from well-known platforms (Google, TikTok, Facebook, Amazon, Netflix, etc.), analyze research and real-life cases on the effects of this manipulation, and discuss measures to counter it. The goal is to provide an in-depth and accessible look at how the "black box" of algorithms shapes our digital experiences and daily decisions, fostering an algorithmic awareness that allows us to act with information and caution.

Algorithmic influence on consumption and entertainment

In the field of digital consumption (online shopping, streaming, music, etc.), algorithms play a central role in personalizing and automating our user experience (ucr.ac.cr). Platforms such as Amazon, Netflix, Spotify, and Google Play use AI-based recommendation engines to analyze our behavior (products viewed, favorite genres, search history, ratings) and automatically suggest other products or content that they think we'll like. For example, Jeff Wilke (then head of Amazon's consumer business) explained that "At Amazon.com, we use recommendation algorithms to personalize the online store for each customer. The store changes radically based on the customer's interests" (blog.blendee.com). This means that two users with different profiles will see different products on the Amazon homepage: a programming enthusiast will be shown computer books, while a new mother will be presented with toys and baby products (blog.blendee.com). Up to 35% of Amazon sales have been attributed to these personalized algorithmic recommendations (blog.blendee.com); although this figure comes from a McKinsey trade report cited in a blog, it reflects the importance of the phenomenon.
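
As a hedged sketch of the "customers who bought X also bought Y" signal behind such engines, the snippet below counts co-purchases in invented shopping baskets and recommends the items most often bought together with a given product. Production recommenders layer weighting, filtering, and machine-learned models on top of signals like this.

```python
from collections import defaultdict
from itertools import combinations

baskets = [  # invented purchase histories
    {"python_book", "keyboard"},
    {"python_book", "monitor"},
    {"toy_blocks", "baby_bottle"},
    {"python_book", "keyboard", "mouse"},
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in combinations(basket, 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(item, k=2):
    """Return up to k items most frequently co-purchased with `item`."""
    related = co_counts[item]
    return sorted(related, key=related.get, reverse=True)[:k]

print(recommend("python_book"))   # e.g. ['keyboard', 'monitor']
```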

Similarly, video and music streaming services leverage their algorithms to define our entertainment consumption. One example is Netflix, where recommendation algorithms analyze which series or movies we've watched to present similar ones on the homepage. As an academic from the University of Costa Rica explains, platforms like Spotify, YouTube, Facebook, and Netflix "use these types of tools to study their users' behavior and recommend content that the algorithm interprets as likely to be of interest to them" (ucr.ac.cr). This may be convenient because it saves us searching time, but it also means we rarely discover suggestions outside of our usual tastes. By always serving what they "think" interests us, these systems tend to reinforce consumption patterns: we watch more of what resembles what we already consume, and are shown less variety. So, for example, if someone watches science fiction documentaries on Netflix, the platform will show them more similar titles. In shopping, if someone tends to buy technological gadgets, Amazon will tend to offer them more gadgets. This circle of recommendations can increase sales and usage time, but it also restricts diversity: it only exposes us to niches and products that have already been profiled.
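
This self-reinforcing circle can be seen in a toy simulation (parameters invented): the system recommends whichever genre the profile already favors, each viewing strengthens that preference, and after repeated rounds the distribution collapses toward a single genre.

```python
import random

random.seed(1)
profile = {"sci-fi": 0.4, "comedy": 0.3, "drama": 0.3}  # initial tastes

for _ in range(20):
    # Recommend a genre in proportion to the current preference weights...
    genre = random.choices(list(profile), weights=list(profile.values()))[0]
    # ...and let watching it reinforce that same preference.
    profile[genre] += 0.2

total = sum(profile.values())
print({g: round(w / total, 2) for g, w in profile.items()})
# Typically one genre ends up dominating; diversity of exposure shrinks.
```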

Another relevant effect in the consumer space is the optimization of prices and offers through algorithms. Although less noticeable to the end user, there is evidence that online retailers (including Amazon) use algorithms that analyze price competition and purchasing behavior to dynamically adjust prices and promotions (blog.blendee.com). This can mean, for example, that the same item of clothing has a different price for different users based on their browsing history, overall demand, or geographic segmentation.
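
The logic can be illustrated with a deliberately simple, entirely hypothetical pricing rule (function name, inputs, and coefficients invented): raise the price when a measured demand signal is high, but cap it relative to the competition. Real retail pricing systems are proprietary and far more sophisticated.

```python
def dynamic_price(base_price, demand_index, competitor_price):
    """Toy rule: mark up with demand (demand_index in [0, 1]) but stay
    within 5% of the competitor's price."""
    marked_up = base_price * (1 + 0.10 * demand_index)
    return round(min(marked_up, competitor_price * 1.05), 2)

print(dynamic_price(50.00, demand_index=0.8, competitor_price=52.00))  # -> 54.0
print(dynamic_price(50.00, demand_index=0.2, competitor_price=52.00))  # -> 51.0
```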

In short, in the realm of digital consumption, algorithms shape what we see and how we navigate the market: they determine which products appear first in online store search engines, which movies or songs are recommended to us, and even in what order of priority. While this facilitates the experience (quickly finding what we "probably like"), it also involves a subtle manipulation of our decisions: the algorithm promotes purchases or viewings that increase the platform's profits, and we might settle for what it suggests without exploring alternative options.

Algorithms and Political Decisions: News, Opinions, and Polarization

Social media algorithms (Facebook, Twitter, Instagram, TikTok, YouTube, etc.) modify the information we consume about the world, and can therefore influence our political ideas, opinions, and even our voting habits. One of the most documented characteristics is the creation of "filter bubbles" and "echo chambers" in the consumption of news and ideological content. The concept of a filter bubble was coined by cyberactivist Eli Pariser and describes how personalization algorithms isolate users from information contrary to their views. In the words of a Costa Rican specialist, "A filter bubble is the result of a personalized search where a website's algorithm selects, through predictions, the information that the user would like to see" (camtic.org). This is based on user data (location, search history, previous clicks), and as a result "[users] are steered away from information that doesn't fit with their views, effectively isolating them in ideological and cultural bubbles" (camtic.org). A classic example is Google: two people searching for "BP" can receive very different results if one is shown business news and the other environmental coverage (camtic.org).

Relatedly, the term echo chamber refers to situations where we only receive information that reinforces our existing beliefs, while dissonant content is filtered out. For example, Internet Matters defines an echo chamber as "a situation in which people only see information that supports their current beliefs and opinions" (internetmatters.org). On social media, this happens because the algorithm "hides" content it considers irrelevant (content we don't engage with) from our feed and prioritizes what generates engagement. The problem is that the content filtered out could offer alternative and balanced points of view. As a result, the algorithm confirms a cognitive bias: the content users see reaffirms their beliefs without presenting different points of view (internetmatters.org). In other words, if we tend to see and share news from a particular political position, we will increasingly receive that type of news, and less from the other side.
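
The filtering mechanism can be sketched in a few lines (data and threshold are invented): posts whose stance the user rarely engages with fall below an affinity threshold and are simply never shown, so the resulting feed only confirms existing views.

```python
def build_feed(posts, affinities, threshold=0.5):
    """Keep only posts whose stance the user has engaged with often."""
    return [p for p in posts if affinities.get(p["stance"], 0.0) >= threshold]

affinities = {"pro_policy_A": 0.9, "anti_policy_A": 0.1}  # from past clicks
posts = [
    {"headline": "Policy A is working", "stance": "pro_policy_A"},
    {"headline": "Policy A has failed", "stance": "anti_policy_A"},
]
print([p["headline"] for p in build_feed(posts, affinities)])
# -> ['Policy A is working']  -- the dissenting headline never appears
```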

This phenomenon of bubbles and echo chambers has concrete consequences in the political sphere. A paradigmatic case of algorithmic and data manipulation was the Cambridge Analytica scandal of 2016. It was shown that this electoral consulting firm obtained (irregularly) personal data from 50 million Facebook users and used it to profile voters for political purposes (elpais.com). While this case primarily involved data collection (through a psychological testing app), part of the strategy consisted of using that data with algorithms to show each person ads and political content designed to convince them. In the words of Mareike Möhlmann (Harvard Business Review), it has since become clear that targeted advertising and highly personalized content on Facebook can "not only push users to buy more products, but also coerce and manipulate them to vote for certain political parties" (hbr.org). That is, microtargeting algorithms allow a political message to be tailored to each individual, exploiting their biases and emotions to influence their voting decision.

Beyond specific cases of electoral manipulation, academic research has found that recommendation algorithms can reinforce polarization. A study from the University of California, Davis analyzed YouTube recommendations based on a user's ideology (ucdavis.edu). They found that "YouTube's algorithm does not protect people from content across the political spectrum," but instead tends to recommend videos that align with the user's ideology. For right-leaning users, recommendations frequently came from channels that promote conspiracy theories and political extremism (ucdavis.edu). In contrast, left-leaning users viewed less problematic content. In the words of the researchers, YouTube "recommends videos that are more in line with the user's ideology, and for right-leaning users, those videos often come from channels that promote extremism" (ucdavis.edu). This effect is colloquially known as the "rabbit hole," where the algorithm leads the user down tunnels of increasingly extreme content.

Another dangerous phenomenon is the spread of political disinformation through algorithms. This doesn’t only occur in video: Facebook, Twitter, and Google can amplify fake or manipulated news if they detect that it generates clicks and reactions. As several analysts point out, this can create networks of disinformation that spread rapidly among like-minded ideological groups. Although some argue that users are responsible for what they read, the design of platforms with opaque algorithms makes it difficult to discern the truth. Therefore, we talk about algorithmic manipulation in the sense that the platforms themselves (sometimes without malicious intent) can indirectly influence public opinion based on how they optimize user attention and their advertising revenue.

In short, in the political arena, algorithms largely decide what ideas and news we encounter on the internet. By personalizing content based on what they think we like, they tend to lock us into our preconceived notions (filter bubbles and echo chambers; camtic.org, internetmatters.org) and can amplify polarizing or false messages. Real-life cases, such as Cambridge Analytica (elpais.com) or the YouTube studies (ucdavis.edu), show that algorithms have the potential to manipulate our information consumption patterns and, consequently, our voting decisions.

Algorithms and Health: Medical Information and Well-being

Algorithmic influence also extends to the health and wellness space. More and more people search for medical information on Google, YouTube, forums, or health apps, and algorithms influence what they find and how they interpret it. For example, autocomplete and the prioritization of results in search engines can skew self-diagnosis. If we search for symptoms, Google may suggest the more common (and benign) diagnoses first, or prioritize sponsored or popular articles. This gives a sense of certainty that may not be reliable. Similarly, on social media and video sites like YouTube, algorithms have been observed to surface harmful health content. During the COVID-19 pandemic, YouTube's algorithms were documented boosting speculative anti-vaccine videos in the months before vaccines arrived (grupogemis.com.ar). According to a cited study, numerous videos promoting anti-vaccine sentiment emerged in a climate of uncertainty, forcing YouTube to adopt measures in conjunction with the World Health Organization to promote verified information (grupogemis.com.ar). This illustrates how an algorithm, by prioritizing content with high audience retention (even if it contains misinformation), can harm public health by spreading dangerous advice.

Furthermore, recommendation algorithms can indirectly influence healthy or risky habits. A recent example is YouTube's own policy related to adolescent mental health. In 2024, the company announced that it would globally limit the recommendation of fitness and body image videos to teenage users (xataka.com.mx). This measure arose after research showed that repeated exposure to idealized images of "perfect" bodies (such as extremely thin or muscular models) can damage self-esteem and lead to eating disorders in young people (xataka.com.mx). Algorithms previously promoted many clips from "fitness" or "ideal beauty" influencers because they generated engagement; now, that approach is being reconsidered. This example, covered by Xataka, shows that the same algorithmic personalization mechanisms that bring the most attractive content to young people can also lead them to adopt unrealistic standards. Recognizing the problem, YouTube decided to adjust its algorithm to protect the emotional health of teenagers (xataka.com.mx).

Outside of social media, there are health apps and devices (fitness trackers, smartwatches, etc.) that use algorithms to suggest diet, exercise, or meditation. These recommendations can be beneficial if well designed, but they can also pose dangers if left unsupervised. For example, a fitness app might assess your exercise level and encourage you to train at an intensity too high for your condition, or a mental health platform might automatically reinforce certain emotional states. The problem is that, because users don't fully understand the inner workings, they tend to assume that algorithmic advice is "objective." However, as experts warn, algorithms can contain biases or programming errors and don't always provide optimal advice.

In short, algorithms impact our health decisions both by filtering the medical information we consume (search results, videos, advice) and by offering suggested habits. Real-life cases confirm the risks: during the pandemic, algorithm-driven health misinformation accelerated (grupogemis.com.ar), and the harmful role of online body image content in young people has been recognized (xataka.com.mx). At the same time, tech companies are also beginning to use their algorithms to promote better health (e.g., by limiting harmful content). This demonstrates the enormous influence algorithms have on the medical information and recommendations we receive, with direct effects on our well-being.

Algorithms in lifestyle, culture and social consumption

Beyond individual consumption and politics, algorithms also affect decisions and trends in our everyday lifestyles and culture. In the realm of popular culture and social habits, platforms like TikTok, Instagram, Twitter, and Netflix influence fashions, musical tastes, humor trends, and even how we relate to others. A recent bibliometric study concluded that the social network TikTok has a notable impact on user behavior in terms of consumption, political orientation, and even study strategies (ojsintcom.unicen.edu.ar). In particular, the authors point out that TikTok not only sets global entertainment trends but also influences users' "brand perceptions" and "consumption decisions" (ojsintcom.unicen.edu.ar). This translates into phenomena such as a viral dance on TikTok boosting demand for a certain song on Spotify, or an Instagram challenge making a certain type of clothing or food fashionable.

Streaming platforms, for their part, shape our leisure time: Netflix not only recommends series, but sometimes sends personalized emails based on the genres we watch. Spotify generates "Discover Weekly" playlists that orient our musical tastes toward what the algorithm suggests. YouTube mixes music videos with tutorials or news on our screens. In all these cases, the algorithmic filter selects what it considers relevant, and this affects what cultural or entertainment information we consume and share. Thus, global trends (a funny video, a viral challenge, an inspiring speech) tend to spread quickly because the algorithm places them first in many feeds simultaneously.

Likewise, messaging apps and social media change how we spend our free time. Instagram, for example, by showing us images of exotic trips or recipes in Explore (based on our like history), can subtly influence our vacation aspirations or diets. Facebook shows local events based on what it believes will appeal to us, influencing leisure plans. Algorithms can even motivate lifestyle challenges: a fitness YouTuber can inspire thousands of users to try a new sport, or a viral creator who teaches vegan recipes can motivate their followers to change their diet.

The combined effect is that we live "locked in" to the preferences that algorithms encode. Researcher Magdalena Wojcieszak noted that "there are no neutral algorithms" in this sense: algorithmic recommendation tends to amplify content that matches what the user already sees, excluding what challenges them (ucdavis.edu). Therefore, our everyday cultural and social choices (what movie I watch, what music I listen to, what places I want to visit) are increasingly mediated by automated systems. While it may seem like we are still choosing freely, the reality is that our screens, and the algorithms behind them, present us with pre-designed options.

Key concepts in algorithmic manipulation

To better understand these phenomena, it is useful to review some essential concepts:

  • Personalization: This is the adaptation of content to each user's interests, based on their data. As we've seen, algorithms examine our clicks, searches, or comments to personalize news feeds, online stores, and recommendations (ethic.es; ucr.ac.cr). While this provides us with relevant content, it also means that each person sees a different "version" of the platform: what a friend sees on TikTok may be different from what we see, even with the same hashtags.
  • Filter bubbles and echo chambers: We already mentioned that filter bubbles isolate us from diverse information (camtic.org), and echo chambers confine us to closed circles of homogenized content (internetmatters.org). In simple terms, an echo chamber is when the algorithm only shows us content that confirms what we believe (internetmatters.org), reinforcing our beliefs without counterpoints. This limits open debate and can radicalize opinions.
  • Algorithmic nudging: This comes from the nudge theory of behavioral economics. It refers to how elements of digital design (colors, button placement, order of options) subtly influence our decisions. When an algorithm "nudges" us toward a preferred behavior (such as purchasing a certain offer or watching a specific video) by manipulating the choice environment, we talk about algorithmic nudging. Möhlmann points out that companies use algorithms to "manage and control people… guiding them toward desirable behavior" in subtle ways (hbr.org). An example is suggesting the most expensive payment option by default, or showing pop-up notifications to keep us engaged in an app (see the sketch after this list).
  • Algorithmic biases: These are systematic biases or errors in an algorithm that can generate unfair or discriminatory results. They often originate in the training data (for example, if a recruitment algorithm has "learned" from records where male candidates predominate, it will tend to favor male profiles). Although this issue goes beyond decision manipulation, it is related: an inadvertent bias can skew recommendations toward products or content that favor certain groups over others.
  • Dark patterns: While not exactly an algorithmic phenomenon, they deserve a mention. They are interface design techniques that, using psychology, cause the user to perform unintended actions (such as subscribing to something they didn't mean to). In combination with algorithms, a dark pattern could, for example, hide the option to unsubscribe or generate accidental clicks.
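
A default-option nudge of the kind mentioned above can be sketched as follows (plan names and prices invented): the interface technically offers every plan, but preselecting the most profitable one predictably shifts what users who make no active choice end up with.

```python
PLANS = {"basic": 5.0, "standard": 10.0, "premium": 15.0}  # monthly prices

def checkout(chosen=None, default="premium"):
    """Return the plan the user ends up with; anyone who does not
    actively pick one silently gets the preselected default."""
    return chosen if chosen is not None else default

print(checkout())          # -> 'premium'  (no active choice made)
print(checkout("basic"))   # -> 'basic'    (user overrode the default)
```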

Together, these concepts show how algorithms can manipulate our behavior in very subtle ways. It's not that there's always a magic "buy button"; rather, multiple impulses are at work: compelling content, default options, the highlighting of certain suggestions, all aligned to lead us toward one decision or another. When we're unaware of these mechanisms, we hardly question the information presented to us and, as experts say, "we become interactants of algorithms rather than interacting citizens" (redalyc.org).

Real examples and studies on the impact on users

Academic literature and media research have documented several cases that illustrate the effects of algorithmic manipulation:

  • In Chile, a study by academic Cristian Buzeta observed that citizens spend almost half of their day in front of digital screens (internet, TV, social media), where algorithms deliver personalized content for persuasive purposes (vti.uchile.cl). His research project focuses on "algorithmic persuasion," understood as the strategic use of algorithms to maximize the impact of targeted messages. Buzeta defines algorithms in digital media as "a set of automated instructions that determine what content is shown to each user and how" (vti.uchile.cl). According to this project, people with "algorithmic awareness" (who know how algorithms work and recognize them) tend to question the recommendations they receive, thus reducing their manipulative influence (unesco.org).
  • In the United States, in addition to the Cambridge Analytica case cited above, there are studies on social media and elections. Research has investigated how Facebook and other platforms favor polarizing content (e.g., fake news during election campaigns). For example, fact-checking organizations called on Facebook and YouTube to implement stricter controls during the 2020 pandemic to curb misleading videos about COVID-19 (grupogemis.com.ar). Although companies are reluctant for fear of being accused of censorship, this type of pressure demonstrates social recognition of the problem.
  • In the academic field, several studies have audited algorithms. In addition to the UC Davis YouTube study (ucdavis.edu), international research has detected biases in search and recommendation algorithms. While some recent studies (e.g., from the University of Pennsylvania) question whether algorithms alone automatically radicalize people, most agree that adaptive systems reinforce the user's preexisting biases and can create one-sided information spirals (especially among young people).
  • Examples of reactions to the problem have also emerged. For example, in 2021, MIT researchers created a test bot that demonstrated how a retail recommendation system (for clothing) could guide users to pay more for the same product by subtly changing the order of options (hbr.org). This and other experiments reveal that tiny changes to the interface, combined with predictive algorithms, can double the likelihood that a user will choose a specific option.
  • The aforementioned bibliometric study on TikTok (ojsintcom.unicen.edu.ar) included 29 relevant articles published through 2024. It concluded that TikTok "has an influence on global trends and its effect on consumers" (ojsintcom.unicen.edu.ar). Although oriented toward scientific research, this reinforces the idea that algorithmic reach is global, affecting everything from fashion to news perception.

Taken together, these cases and studies confirm that algorithmic manipulation is not just theory. The fact that academic bodies, media outlets, and NGOs are studying the phenomenon indicates that the recommendations and content we see (whether a news article, a product on Amazon, or a video on TikTok) may be designed with persuasive strategies in mind. Recognizing these real-life cases helps us understand the risks: what we think of as "neutral information" is often mediated by commercial or ideological motives.

Proposals and solutions to address algorithmic manipulation

Faced with these challenges, various experts and organizations have proposed strategies to mitigate algorithmic manipulation:

  • Education and algorithmic literacy. A key measure is to increase users' algorithmic awareness, that is, to teach them how these systems work and what effects they have. As UNESCO points out, people with greater algorithmic awareness "search for news on their own and do not trust the algorithm's judgment," which makes them less vulnerable to misinformation (unesco.org). Similarly, educators warn that algorithmic literacy is today as basic as learning to read and write (redalyc.org). Without these skills, we are like illiterates in a world of digital information, exposed to inadvertent manipulation. Incorporating digital critical thinking courses in schools and adult training workshops can help people distinguish algorithmically curated content from objective information.
  • Platform transparency. Another key solution is to demand greater transparency about how algorithms operate. This means tech companies should clearly state what criteria the algorithm uses to display content and give users the option to disable personalized recommendations if they wish. For example, the recently passed European Union Digital Services Act (DSA) requires giants like Amazon, Google, Meta, and TikTok to disclose certain algorithmic practices. According to Maldita, this regulation requires transparency measures in recommendation algorithms, content moderation, and advertising (maldita.es). In practice, this means that users will be able to request an explanation if they receive a content restriction, or will have tools to "reset" recommendations on their social media platforms (maldita.es). At the international level, there are also initiatives for ethical codes and independent audits: for example, UNESCO and scientific organizations have published guiding principles (ethics, transparency, human oversight) for artificial intelligence that should guide the design of algorithms (unesco.org). Applying these principles to platforms means, for example, that systems must be periodically audited to correct biases, and that developers must explain in an understandable way why they offer certain content (the principle of explainability).
  • Regulation and public policy. In addition to the DSA, several countries and blocs are advancing laws on AI and algorithms. The EU presented its Artificial Intelligence Act (still pending in 2023) to govern the highest-risk AI systems (mandatory evaluation, limitations on mass surveillance, etc.). Other countries are planning similar frameworks. The idea is to hold companies legally liable for harm caused by their algorithms (e.g., information poisoning or discrimination). Proposals also call for an independent regulatory body (or each country's telecommunications supervisor) to monitor algorithmic practices on networks and ensure that fundamental rights (freedom of expression, privacy, non-discrimination) are not violated. In some cases, a "public algorithm registry" is even proposed, where companies must register their most powerful recommendation systems.
  • Ethical design of algorithms. In addition, there is a case for incorporating ethics from the algorithm's creation ("privacy by design" or "ethics by design"). This implies that engineers and user experience (UX) designers consider social impacts from the outset: for example, by avoiding deliberately creating bubbles or biases. As UNESCO emphasizes, an "ethical compass" is needed in AI development because these technologies, without ethical guardrails, run the risk of reproducing prejudices or fueling social divisions (unesco.org). In practice, an ethical algorithm must preserve the diversity of information, explain its rationale (avoiding the "black box"), and prioritize the good of the user over profit. Some companies are already testing solutions: YouTube, following criticism over medical misinformation, now prioritizes official sources in health-related searches; Facebook stopped automatically recommending extremist groups. These are signs that ethical design can make a difference.
  • End-user options. Finally, users themselves can take precautions. It’s helpful to diversify information sources (not relying solely on what appears in a single feed), adjust privacy settings (e.g., clear cookies or history frequently), and read critically. Some suggest using browsers or tools that block tracking and personalization, or switching between different devices or accounts to occasionally “reset” algorithms. Although this requires effort, it increases awareness: cross-referencing information (e.g., comparing search results across browsers) makes personalization bias evident.

In short, tackling algorithmic manipulation involves combining education (so that users understand what's behind each screen) with regulation (holding companies accountable) and design ethics (prioritizing social well-being). Proposals range from school curricula and public campaigns on digital literacy (redalyc.org; unesco.org) to legal norms such as the DSA (maldita.es) and international codes of algorithmic ethics (unesco.org). By joining these efforts, it is possible to reduce the harmful influence of algorithms on our decisions and regain control over our digital experience.

References

  1. Agency for Electronic Government and the Information and Knowledge Society (AGESIC). (2021, February 4). Understanding what algorithms are and how they work. Retrieved from the AGESIC website (gub.uy).
  2. Jordán, D., Izaguirre, J. A., & López, A. L. (2024). The social network TikTok and its impact on behavior change: A bibliometric study. Intersecciones en Comunicación, 2(18). https://doi.org/10.51385/jzxcbs79
  3. Llano Neira, P. de. (2018, March 18). A consulting firm that worked for Trump manipulated data on 50 million Facebook users. El País. https://elpais.com/internacional/2018/03/17/estados_unidos/1521308795_755101.html
  4. Maldita Tecnología. (2024, January 2). Everything about the Digital Services Act (DSA): What it is, how it affects us, and how it is being implemented. Maldita.es. Retrieved from https://maldita.es/malditatecnologia/20240102/ley-servicios-digitales-dsa-como-se-esta-implementando/
  5. Peckham, S. (2023, April 13). What are algorithms? How to prevent echo chambers and keep kids safe online. Internet Matters. Retrieved from https://www.internetmatters.org/en/hub/news-blogs/what-are-algorithms-how-to-prevent-echo-chambers/
  6. Risco de la Torre, E. (2024, December). What's really behind social media algorithms? Ethic. Retrieved from https://ethic.es/2024/12/que-hay-realmente-tras-los-algoritmos-de-las-redes-sociales/
  7. Russell, A. (2023, December 13). YouTube video recommendations lead to more extremist content for right-leaning users, researchers suggest. UC Davis News. https://www.ucdavis.edu/curiosity/news/youtube-video-recommendations-lead-more-extremist-content-right-leaning-users-researchers
  8. Soto, M. G. (2017, September 18). The Internet filter bubble. CAMTIC. Retrieved from https://www.camtic.org/hagamos-clic/la-burbuja-de-filtros-en-internet/
  9. UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. Retrieved from https://www.unesco.org/es/artificial-intelligence/recommendation-ethics
  10. UNESCO. (2024, May 13). Algorithms and news: Why an algorithmically aware society is important to combat disinformation. UNESCO. Retrieved from https://www.unesco.org/es/articles/algoritmos-y-noticias-por-que-tener-una-sociedad-con-conciencia-algoritmica-para-enfrentar-la
  11. Aparici, R., Bordignon, A., & Martínez-Pérez, J. R. (2021). Algorithmic literacy based on Paulo Freire's methodology. Perfiles Educativos, 43(Esp.), 36-54. https://doi.org/10.22201/iisue.24486167e.2021.Especial.61019
  12. Möhlmann, M. (2021, April 22). Algorithmic nudges don't have to be unethical. Harvard Business Review. https://hbr.org/2021/04/algorithmic-nudges-dont-have-to-be-unethical


Orlando Javier Jaramillo Gutierrez

Entrepreneur, Technologist, Founder-Director of Asperger for Asperger. Writer of books for the autism spectrum community. Certified in Cybersecurity and Data Science by Google and IBM. Editor and Author: Technology Education: The Magazine