The BBC article Are we trapped in our own web bubbles? and Eli Pariser's TED talk 'Beware online filter bubbles' are two resources that discuss how personalised search results could limit our access to new information.
Search engines play a major role in providing access to knowledge and information. The order of the links appearing in search results therefore has a significant impact on the types of information that will be accessed by the majority of people (witness how many people only ever use the first page - or even half page - of search results). Additionally, some search engines and social media sites have started to use personalised search results, which can prioritise results that are similar to pages we have previously viewed - thus forming a so-called 'search bubble' or 'filter bubble' that might limit our exposure to new views.
Despite this, there is still some debate over just how significant the filter bubble effect is. A 2015 study of Facebook data suggested the effect was minimal or non-existent - but the study itself was quickly criticised. Filter bubbles returned to the media spotlight after political events including the election of Donald Trump and the UK Brexit vote. The Guardian attempted to examine the effect in 2018, while the University of Illinois has an interesting page examining the effect and presenting an experiment you can try for yourself.
This can be a useful starting point for exercise 1.8, and also links closely to the IB Theory of Knowledge (TOK) course.
Internet censorship is a huge topic, and one that truly highlights the global nature of the ITGS course. It is also closely related to the IB TOK course.
As an introduction to this topic, asking students to discuss or research censorship in their own countries (and to share their opinions of it) is often very enlightening. The news articles below have been divided into general categories simply to facilitate navigation.
Increasingly, search engines, social networks, and other web sites may also be asked to block access to certain content - either locally or globally. This is particularly significant because millions of users rely on these services to access information: the absence of a piece of content may well be taken as an indication that the content simply does not exist. The news articles below provide examples of this type of filtering:
The digital citizenship page covers some of the potential legal impacts of online behaviour.
These links may be helpful as examples of the types of 'Artificial Artists' that are currently available.
Wikipedia is often criticised for being "unreliable", but few criticisms go beyond "anybody can edit it". The resources below examine the demographics of Wikipedia's contributors and editors, and provide some insightful statistics that can be a great source of discussion in both TOK and ITGS lessons.
This can lead to some great TOK knowledge questions, including:
Policing a global web service such as Facebook or Twitter is clearly a difficult task, and there are many social impacts and ethical issues to consider. Most obviously, different countries, regions, and users have wildly different standards regarding what is acceptable and unacceptable. Content also spreads extremely quickly online, while new situations constantly arise, requiring companies to make quick policy decisions. Below are examples of situations where material has been removed (and sometimes reinstated) by social media sites. These issues are also a great opportunity to link ITGS and TOK, with many knowledge issues surrounding censorship and filtering.
In May 2017 a Facebook document was leaked which revealed their internal rulebook on sex, terrorism and violence. Finally, ITGS students might be surprised to learn who makes the decisions about removing content - The dark side of Facebook explains this.
Driverless or self-driving vehicles are often promoted as being safer than human drivers. However, there may be situations in which an accident is unavoidable. In these situations, how should a driverless vehicle be programmed to behave? Which course of action should it take if all have negative outcomes? And, of course, who takes responsibility for any damage that is caused?
This is a topic which links ITGS and TOK. The ethical dilemma of self-driving cars (video) is a good introduction. Why Self-Driving Cars Must Be Programmed to Kill and Ethics of Self-Driving Cars are great articles that examine the topic in more detail.
In March 2018 an accident occurred which was reportedly the first pedestrian death caused by a driverless vehicle. The Uber self-driving car hit and killed Elaine Herzberg, 49, in Arizona. The human monitor in the car also failed to spot the pedestrian until seconds before the collision. Uber stopped all self-driving experiments in the aftermath of the crash.
Velodyne, the company that produces the sensors for the cars, reported that the sensors were working correctly - suggesting a software issue may have been the cause. It was later reported that the car's sensors did detect Herzberg, but the vehicle did not swerve because the software was uncertain about the nature of the obstacle.
This article discusses how artificial intelligence systems can demonstrate both gender and racial 'bias'. This bias stems from unrepresentative training data - such systems are better at recognising white males because they are given more images of white males as training data. The article and video could lead to some interesting TOK discussions, such as 'Can machines be biased?' and 'Can we ever escape bias?'.