Comment on Privacy Is A Privilege? by helterskelliter


Thank you for your compliments! I think it’s crazy how demonized some things have become, especially things like the Internet that can be used for great public good if properly utilized. That said, I understand that it’s hard to provide free access to a resource like the Internet. People need to be properly compensated for keeping the Internet and its operations running. Idk how we’re going to resolve THAT problem though so….

Again, thanks for sharing your thoughts!


Comment on Developing Digital Literacy (One Video at a Time)~ by helterskelliter


I think it’s interesting to consider what the Internet would be if it were totally metered, pay-to-play for consumers. I’m reminded of Do Not Track’s second episode, where they ask us how much we’d be willing to pay for different services like Facebook and Google. As it stands, Google makes more money selling my data to advertisers than I’d be willing to pay for the service so… there are a few guilty parties here (some obviously more than others but…)

I think becoming empowered about these issues is going to require us all being knocked down a few pegs. More, we may need to reconsider our conceptions about the Internet and its purposes.


Following the Crumbs Can Be Crummy…

This post is late because of a certain snarky blog poster’s birthday this weekend #24on24



This week, we explored the nature of truth in online spaces. Online, false news and misinformation often spread more rapidly and much further than honest, truthful reporting. With this being the case, how is one to navigate online spaces and make decisions about the truth or “fakeness” of a source? Can there be any fake or real news in a place like America that has become so divided? More, does truth even matter anymore when it is so easy to make up information that supports a false narrative and/or straight-up choose whether or not to believe in facts?

Personally, I believe the truth will always matter. I believe it is important to question information and think critically about where information is coming from but I ultimately do believe that there are facts and indomitable truths. Maybe they’re not Plato’s capital “T” Truths but there are true things/people/facts out there. It is important to believe in the reality of the truth, to me, because if we can’t agree on a set of truths, then we can’t have a meaningful discussion. We could only engage in arguments–which seldom resolve problems.

What is causing division in this country, in my opinion, is a lack of faith in news organizations and traditionally heralded, respected sources of information. This lack of faith, I believe, is being caused largely by political pundits and agents of a particular political agenda who benefit substantially from the spread of misinformation and from generating distrust towards facts and critical information. How do we circumvent this, though? How do we identify misinformation online? And, more, how do we get people to care about misinformation?

That latter question may be more challenging to answer, but I did come across some sources that talk about fake news and identifying misinformation online. One of them is a news article by The New York Times. In the article, “Evaluating Sources in a ‘Post-Truth’ World: Ideas for Teaching and Learning About Fake News,” some strategies are provided for navigating misinformation online. More, how false information spreads is analysed and discussed. Two other articles quoted in this article discuss the issue of fake news at length: “As Fake News Spreads Lies, More Readers Shrug at the Truth” and “How Fake News Goes Viral: A Case Study.” Both of these articles go into more specific detail about fake news and how its spread operates in online spaces. The second article seems to use Mike Caulfield’s “Four Moves” method to determine whether or not a specific claim (i.e., that fake protesters were being shuttled to Trump rallies) is fake (It is–no one needs to be paid to protest the guy >.>).

Anyway, I think these articles are good sources to provide to our field guide for navigating the web. They elaborate more upon the problem of fake news in our Internet landscape and provide examples for navigating this complex and complicated landscape.

I’d give these sources about 7/10.


~Till next time~

AI Isn’t So Bad After All!

Photo by Pixabay

For this week’s Field Guide article, I decided to use the article my mother sent to my family last week about artificial intelligence (AI). “7 Predictions for Artificial Intelligence in 2019” by Salil Sethi talks about the lighter side of AI and the impact it will have on society in new ways. Sethi makes great points and references about AI and what companies will go through in the near future. Here are the points I found to be important in each section:

1. Machine learning as a service (MLaaS) will be deployed more broadly

  • MLaaS is offered by technology powerhouses like Google, Microsoft, and Amazon.
  • It is sold primarily on a subscription or usage basis by cloud-computing providers.

2. More explainable or “transparent” AI will be developed 

  • AI continues to carry the “black box” burden, posing a significant limitation in situations where humans want to understand the rationale behind AI-supported decision making.
  • AI that can clearly document its logic, expose biases in data sets, and provide answers to follow-up questions.
  • Humans need to know that the technology can perform effectively and explain its reasoning under any circumstance.

3. AI will impact the global political landscape

  • Countries that have AI talent and machine learning capabilities will experience tremendous growth in areas like predictive analytics, creating a wider global technology gap.
  • More conversations will take place around the ethical use of AI. Naturally, different countries will approach this topic differently, which will affect political relationships.
  • AI’s impact will be small relative to other international issues, but more noticeable than before.

4. AI will create more jobs than it eliminates

  • A new type of role is emerging: humans are needed to support AI implementation and oversee its application.
  • More manual labor will transition to management-type jobs that work alongside AI.
  • In two years, AI will create 2.3 million jobs while only eliminating 1.8 million.

5. AI assistants will become more pervasive and useful

  • AI assistants will become more adept at handling requests and completing tasks.
  • With advances in natural language processing and speech recognition, humans will have smoother and more useful interactions with AI assistants.

6. AI/ML governance will gain importance

  • More organizations will create governance structures and more clearly define how AI progress and implementation are managed.
  • These structures will be tremendously important as humans continue to turn to AI to support decision-making.

7. AI will help companies solve AI talent shortages

  • Companies will use AI to find talent, fill job vacancies, and propel innovation forward.
  • The biggest challenge companies are facing related to using AI is a lack of available talent.
  • And as technological advancement continues to accelerate, it is becoming harder for companies to develop talent that can lead large-scale enterprise AI efforts.

I would give this article a 10 on the rating scale. It’s relevant because AI is something that can be not only intimidating but scary as well. This article, however, brings out the lightness of AI in order to show the benefits that could come out of it.

What Color Duct Tape Should Go On My Webcam? 🤔

There Is No “NO” Button…

This post may be late because it’s a certain snarky blog poster’s birthday on Sunday, the 24th….


Welcome back to the hellscape ^.^ This week, we’re exploring the circumstances that led to a post-truth Internet and the creation of a platform that is responsible now, more than ever, for spreading more “fake” content than real.

Strap in!

There Are Only “Okay” Buttons

In this day and age, I think it’s a given for most of us to believe that more than half of what we see online is fake. At the very least, we don’t necessarily believe that the content we encounter online has a high truthiness factor. This may be exclusive to younger generations, but I do think it is a growing sentiment, regardless of political or social leanings in many cases. No one believes everything they see online anymore.

But, why?

This week, we explored some of the strategies people can use to determine whether or not a source of information is credible. One of the methods we explored is Mike Caulfield’s “Four Moves.” I consider this a “work backwards” method. Essentially, before considering how truthful information is, you should look at the context in which this information exists–Are there other sources cited within the source? Are there other credible publications put out by this source? Can claims made within be verified by other sources? No? If not, why? To me, these all seem like basic moves one makes while conducting thorough and rigorous research. But, as we can see in this analysis of a suspect photo, these steps are apparently not so obvious.

Then why do so many people think the Internet is so fake if this kind of rigorous inspection of information is not so common?

Personally, I believe it is because of the recent, rigorous work others have done in exposing cover-ups both online and IRL that people have become more suspicious in this age. Also, I think political leanings have served to make people suspicious of all information they come across online, especially if it contradicts their world view and regardless of whether or not it comes from a credible source. We are living in “shady” times, and I think the Internet has been used in the service of being shady but has also served as a microscope through which to inspect this shady activity.

Anyway, like being tracked online, I think this idea that the Internet is fake is a concept many of us now take as a given and, really, have come to expect. We don’t necessarily all remember a time when the Internet was a place where you could be fake and it didn’t matter. Which is another aspect of this issue: the idea of being fake online is almost entirely associated with nefarious activity or with this sense of wrongdoing. Basically, if you aren’t you online, the same you you are IRL, then you have something to hide or you are purposefully trying to fool people into believing you are something you are not. There’s no playfulness or idea of experimenting with identity anymore. (Well, I do think some of that is coming back, but I’ll save that discussion for a future post.) I think our jadedness with the post-truth Internet could more aptly be described as an expression of our fears–our fears of being fooled or being ridiculed or being made fun of for falling for something we believed to be true. I believe there’s a lot of complex emotion wrapped up in our ideas about the Internet and its ability to rapidly and unrepentantly spread false information.

This article, by Max Read, explores the web of ideas surrounding the post-truth Internet. Essentially, the core argument of this article seems to be that it’s not just one component of the Internet that is fake–it’s all of them. There are fake people using fake sites made by fake businesses to, ultimately, make real money. According to this article, that’s largely the problem. Read states, “Everything that once seemed definitively and unquestionably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real.” Here, Read is referencing the concept of the Inversion. Basically, the Inversion is the tipping point where “real” traffic becomes more suspect online than bot traffic or “unreal” traffic. Detection and tracking systems become better at recognizing bot traffic than traffic generated by real users. It has a strong Matrix texture to it, in some ways. I think Read makes a very compelling case in this article for more attention to be paid to fake news and the online tracking around it, but I’m not sure I totally buy into everything he’s saying. At least, I don’t necessarily agree with some of his premises.

Mainly, I find it contentious to say that we are any more fake online than we are IRL. Sure, the Internet provides more opportunities to be fake in some regards but, ultimately, I think it is preposterous to say that we are any more real outside of the Internet. With how much social, academic, professional, political, cultural, etc. conditioning we have experienced every second of every day, from the moment we are alive, I think it’s inaccurate to say we are real outside the Internet and fake online. Like, I can’t agree with that. I think it’s more nuanced. I think it’s more complicated. (Check out my thoughts on that here.)

Something important that Read does talk about and that I agree with is that only advertisers benefit from the current state of the Internet. Currently, the Internet is good for ads. This is, in large part, due to unregulated data tracking and places like “click farms”. It is far too easy to game the system.

“Episode 2” of the documentary series Do Not Track explores how easy it is for different entities to track us, cull our data, and place targeted ads. Cookies, which are not regulated in the US (Communism is apparently cool so long as it’s for surveillance and everyone gets a cookie), can attach themselves to our computers and send back fairly comprehensive profiles based upon our data. It’s far too simple.

It seems that so long as perpetuating and peddling inaccurate information is profitable, it’s not going to stop anytime soon. Under this system, you and I only have value so long as we can generate revenue. More than that, it doesn’t seem to matter if you or I know what is and is not true, because that has no value under this system. As stated in Do Not Track, there is no “No” button for cookies; only an “Okay” button. Even if there were value in demonstrating resistance, there’s no way to do it. Which, to me, seems pretty bleak. Like, the Panopticon doesn’t even care anymore if you know that there’s no one really in the tower. That’s scary.

All this said, I feel like I need to reaffirm my own belief in the power of truth and of speaking truth to power. Though it may not have any monetary value, truth is one of the most worthwhile currencies. Everything else may pass, but the truth will always remain. It is gold. Right now, it may feel like we’re trying to get gold out of mercury, like it’s pointless to try for the truth let alone care about it. But it’s important, now more than ever, that we are consistent in our efforts. The truth doesn’t always have to be the loudest voice to be heard; just the most consistent. Power will never hear a truth that isn’t voiced. More, you and I will never believe the truths we don’t reaffirm for ourselves. If anything, that is what the Internet is revealing to us.


Bonus Post


Daily Digital Alchemies

This week, I had some fun and created an alternate persona online named Veronica ^.^ She swears she has no idea where any emails may have gone or where any video tapes are or what the word “collusion” really means…..>.>

Also, I had some fun with pixelating an image of the night sky which I feel represents my feelings towards alchemy: that alchemy is a bright light in an otherwise dark sky. (Same as the truth.)

~Till next time~

Real or Fake? Or…is the Fake Real? Is the Real Real? HELP!

“A world suffused with deepfakes and other artificially generated photographic images won’t be one in which “fake” images are routinely believed to be real, but one in which “real” images are routinely believed to be fake–simply because, in the wake of the Inversion, who’ll be able to tell the difference?” -Max Read

This was a busy yet fascinating week! Be prepared; this blog is going to be loaded and packed with all types of golden nuggets of information. The main topics of this blog will be about Tracking, Cookies and Privacy, and YouTube and Inversion. Let’s begin!


I began the checklist by completing Day 3 and Day 4 of the Data Detox. Day 3 required us to do some digging into Facebook to see how well it knew us. I didn’t even know there was a part of Facebook that had “Your Interests.” I discovered that Facebook doesn’t know me at all! In my top interests, there was Motherhood (I have no kids), Tattoos (I have none), Dresses (I hate dresses), Yoga (Can’t even do a split), Valentine’s Day (one of my least favorite “holidays”), and Cosmetics (I don’t wear makeup). I am puzzled and confused; how did Facebook determine that these were my top interests? I am trying to figure it out, but I have no idea. This activity was enlightening and educational. Doing a deep-cleaning of my account was very refreshing. Day 3 said something that stood out to me: “The Facebook app has permission to access your contacts, location, camera, storage, texts, calls, and more. So if you want to log onto Facebook account on the go on your phone, it’s recommended to use the browser and avoid installing the Facebook app.” I have used the browser for Facebook before, but it always seemed “inconvenient” and “annoying” compared to the app, which is easier to use, especially when I am using it for business purposes. This activity did make me want to take a detox from the app and start using the browser more.

Day 4 was interesting as well. When it comes to trackers online, we all have a browsing fingerprint. I used the online tool they recommended, Panopticlick, to see if I was trackable. These were my results:

[Screenshot of my Panopticlick results]

I was relieved to know that I do have strong protection against Web tracking!
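The fingerprinting idea behind a tool like Panopticlick can be sketched in a few lines. This is purely my own illustration, not code from the Data Detox or the tool itself: combine enough browser attributes (user agent, screen size, fonts, etc.) and hash them, and you get a stable identifier that can follow you around with no cookie involved.

```python
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    """Combine browser attributes into a single stable identifier."""
    # Sort keys so the same set of attributes always yields the same hash
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical visitor: each attribute alone is common, but the combination
# is often unique enough to single you out
visitor = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14)",
    "screen": "1440x900",
    "timezone": "America/New_York",
    "fonts": "Arial,Helvetica,Times",
    "language": "en-US",
}

print(browser_fingerprint(visitor))  # same attributes -> same ID, every visit
```

This is why “strong protection” against tracking usually means making your browser look as generic as possible: change any one attribute and the whole fingerprint changes.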

Cookies and Privacy

Episode 2 of the Do Not Track documentary gave important information about cookies and privacy on the internet. A cookie, as the documentary puts it, is like a text file with a specific ID number that lets a website or browser remember you when you come back to that particular site. Cookie policies always have an “OK” button to click but never an optional “NO” button. What are we saying “okay” to? It’s interesting because everyone does it. A cookie policy blurb pops up at the top or bottom of a website and, to quickly move the annoying box away, we click okay so it vanishes. Without reading what the blurb says, we click okay.
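That “text file with an ID number” idea can be sketched as a toy server. This is my own minimal illustration (a hypothetical site, not anything from the documentary): on your first visit the site mints an ID and tells your browser to store it; on every later visit the browser sends the ID back, and the site’s profile of you grows.

```python
import uuid

profiles = {}  # cookie ID -> what the site remembers about you

def handle_request(cookies: dict) -> dict:
    """Simulate one page visit; `cookies` is what the browser sends back."""
    visitor_id = cookies.get("site_id")
    if visitor_id not in profiles:
        visitor_id = str(uuid.uuid4())        # first visit: mint an ID
        profiles[visitor_id] = {"visits": 0}  # start a fresh profile
    profiles[visitor_id]["visits"] += 1
    # The response tells the browser to store the ID (like a Set-Cookie header)
    return {"set_cookie": {"site_id": visitor_id},
            "visits": profiles[visitor_id]["visits"]}

first = handle_request({})                    # no cookie yet: treated as new
second = handle_request(first["set_cookie"])  # browser sends the ID back
print(second["visits"])  # 2 -- the site recognized the return visit
```

Notice there is no “no” path in this sketch either: the only way to avoid the ID is for the browser to refuse to send it back.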

YouTube and Inversion

Using Hypothesis for the article How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually by Max Read was a great way to do further research about the “real and fake” behind what’s on the internet. Before getting into the section of the article that I picked to further research, there were a few points at the beginning of the article that I found…well…alarming:

  • Less than 60 percent of web traffic is human; a healthy majority of it is bot traffic.
  • In 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”
  • The “fakeness” of the post-Inversion internet is less a calculable falsehood and more a particular quality of experience–the uncanny sense that what you encounter online is not “real” but is also undeniably not “fake,” and indeed may be both at once, or in succession, as you turn it over in your head.
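The Inversion in that second bullet can be made concrete with a toy example. This is entirely my own illustration, not YouTube’s actual fraud detection: imagine a naive detector that assumes whatever the majority of traffic looks like must be the “human” baseline. Once bots cross 50 percent, the detector’s labels flip, and real humans start looking like the anomaly.

```python
def naive_detector(traffic: list) -> dict:
    """Assume the majority behavior in `traffic` is the 'real' baseline."""
    majority = max(set(traffic), key=traffic.count)
    return {
        "assumed_baseline": majority,
        "bots_called_real": majority == "bot",  # the Inversion condition
    }

before = ["human"] * 60 + ["bot"] * 40  # humans still the majority
after = ["human"] * 45 + ["bot"] * 55   # past the tipping point

print(naive_detector(before)["bots_called_real"])  # False
print(naive_detector(after)["bots_called_real"])   # True -- the Inversion
```

Real systems use far richer signals than a majority vote, of course, but the failure mode is the same: a model calibrated on polluted traffic ends up treating the pollution as normal.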

The section that I decided to look at was “The Content Is Fake,” which talked about YouTube, how it has become difficult to separate what’s real and fake, and the dangers that come with that when it comes to children. Read talked about “deepfakes,” the “now-infamous technology that uses artificial-intelligence image processing to replace one face in a video with another.” This reminded me of an episode of Family Matters, Season 8, Episode 12. For those who may not know, Family Matters was a sitcom in the 1990s about the Winslow family and their pestering (but ultimately beloved) neighbor Steve Urkel, a clumsy yet scientific genius. In this episode, Steve proved a man was innocent by using his computer to show the court how the real criminal used a computer to swap faces with the innocent man. This was only a little over twenty years ago, and it still applies today.

Two other articles were a part of this section. The first one was “A Style-Based Generator Architecture for Generative Adversarial Networks” by Tero Karras, Samuli Laine, and Timo Aila. The point or purpose of this article was:

  • Proposing an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. (I have absolutely no idea what that means).

I gave this article a try, but it was challenging to follow and understand. The other article that was part of my section was “The New Meme Is Eating Sugar and Telling Lies” by Brian Feldman. This article described YouTube as “a default entertainment source for young children, and a type of automated babysitter for parents.” Bots have taken over what children click on when it comes to videos that seem harmless, but the truth behind it is that none of it is real. The question I had while reading this article: this must be illegal or something, isn’t it? And if it’s not, then maybe it should be! Personally, I think it’s wrong to be that deceitful, especially when it comes down to children.

DDA Time!

DDA266: A Tweet Signal in the Sky

DDA267 Introduce Us to Your Generated Fake ID

Now that we are all caught up, see you next week!

Other Blogs!

Diving in the Deep End of Digital Alchemy: Studio Visit

I Actually Really Do Feel like Someone is Always Watching Me…👀

The ‘K’ in Keurig Stands for Kreepy