The concept of Corporate Personhood is grounded in the fact that the owners of a corporation are not financially liable for the obligations the corporation enters into, and the corporation is not financially liable for the personal obligations of its owners. It means that a corporation has many of the legal powers that people have: it can enter into contracts, acquire assets, incur obligations, and even enjoy protection under the US Constitution against the seizure of its property. Socially, much of the issue some people have with corporations being treated as "people" under the law deals with "the Supreme Court's 2010 Citizens United ruling striking down limits on independent corporate spending in elections," as Kent Greenfield writes in his article "If Corporations Are People, They Should Act Like It" for The Atlantic. Ethically, the issue becomes even more complex. If corporations are individuals, then how can the true made-of-organic-matter, not-just-legally-defined individuals who make the company's decisions be held responsible for choices that put corporate profit ahead of the value of human life? Since the Nuremberg trials, the world has agreed that the Nuremberg defense ("an order is an order") is no defense at all. When we consider the justification likely employed by the management and decision makers at companies like IBM during the Second World War ("money is money"), we see that corporate greed created a villain that should be held just as culpable as the soldiers themselves. Make no mistake: there is a distinct difference between a home goods store selling a knife that is ultimately used in a violent crime and a company working directly with a criminal to develop an effective tool that the company knows will be used in a malevolent way.
In the Mic article "This Is the Hidden Nazi History of IBM - And the Man Who Tried to Expose It," Jack Smith IV writes that IBM "didn't just sell tools and products; they collaborated with the Nazis in creative ways to help them design and execute the systematic destruction of the Jewish people." Given that, I do not believe the question of whether corporations should be responsible for immoral or unethical use of their products is worded strongly enough to capture IBM's role. I believe the question at hand is whether corporations should be responsible for knowingly selling their products for immoral or unethical uses. It should come as no surprise that my answer is yes. When evaluating decisions from a financial management standpoint, tradition calls for a manager to weigh costs against benefits to find the net value. In doing business with Nazi Germany, IBM's German subsidiary knew that the costs included the massive loss of human life and still decided that the monetary benefit was worth proceeding with business. In the New York Times article "IBM's Sales to the Nazis: Assessing the Culpability," Richard Bernstein presents the counterargument that "Still, it was not clear until at least 1942, even to many Jews, that genocide was not only the Nazis' goal but also a goal they were determined to achieve." As such, Bernstein argues, IBM could not be expected to have known the details of Nazi Germany's nefarious goals. To that point, I believe that even if IBM was not aware of those plans when it began doing business with the Reich, it continued that business through every stage of the Holocaust. Smith agrees, writing in the Mic article mentioned earlier that IBM's machines had a part in each stage, from identifying and ostracizing the Jews to ghettoizing, deporting, and ultimately killing them.
In the New York Times excerpt from IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America's Most Powerful Corporation, Edwin Black writes that, although IBM communicated with the regime on a daily basis throughout the 12-year Reich, it employed a "don't ask, don't tell" policy. The company ignored the very atrocities that it enabled, feigning ignorance even as Watson's personal representatives constantly visited Berlin or Geneva, "monitoring activities, ensuring that the parent company in New York was not cut out of any of the profits or business opportunities Nazism presented." Should corporations refrain from doing business with immoral or unethical organizations or persons? The US government seemed to think so, but even when such business (and even contact) with German Nazis was illegal, IBM hid behind its European offices and subsidiaries to maintain plausible deniability. Once again, the wording of the question itself, in my opinion, does not adequately convey IBM's awareness of its Nazi clients' actions. Corporations should undoubtedly refrain from doing business with clients whom the company knows intend to use its products unethically. "Corporations are not actually living, breathing, physical beings. They cannot go to jail, they cannot lose their lives, and they do not think or feel. Their actions and inactions are the sum total of the actions and inactions of their members." I believe that if corporations are afforded the same rights as individual persons, they should be expected to carry the same ethical and moral obligations and responsibilities. I also believe that where they choose to ignore those obligations and responsibilities, the decision makers need to be held responsible. If a parent tells their child to throw a dish on the floor in the middle of a store, that parent is expected to pay for the dish when it breaks.
Other people may clean up the pieces, and the child may get a strong talking-to, but it's the decision maker who must be held accountable for his or her decision. In a similar way, I believe IBM's decision makers who worked with Nazi Germany should have been tried and faced punishment the same way Nazi soldiers did. Following money is just as bad as following orders, especially when that greed spells the loss of human life.
So, I'm looking for those fuzzy heels, you know the ones, the really cute ones that almost look like marabou feathers across the strap? I don't know what to search for, so I'm trying "fuzzy heels," "furry heels," "marabou heels," and, in one attempt fueled by desperation, "those heels that european gold digger bought Carrie at Dolce in Sex and the City Season 1". I don't know how many links I've tried before I finally find them: beautiful and pink and fluffy and out of my budget. For days, they haunt me. I'm not going crazy; the advertisement for that exact pair of shoes really is everywhere. They're wearing me down; I'm losing willpower. One day I break; I click on the link. I'm redirected, and I get to the site thinking about how many hours I'll have to work to make up for the shoes, but how they're worth it. I go to add them to my cart, and they're sold out in my size. If they're going to be creepy and keep tabs on me, can't they keep tabs on my shoe size too? That's just frustrating.
One major concern surrounding targeted online advertising is reducing individuals to data points. It's the inherent lack of respect in undermining a person's perceived* privacy and treating them as a product rather than a person. I say perceived* with an asterisk because, if you want to get technical, this is what those individuals agreed to. That being said, when the terms and conditions are pages long and the options at hand are "agree" or "don't use this service," most people agree without a second thought. If you don't think about the information advertisers and companies have and only think of the convenience of their respective advertisements and services, you might not even mind. If you are in the market for a new blender, for instance, wouldn't it be convenient to alert all major blender retailers? They would each advertise their blender promotions, and you would choose to purchase from the retailer with the best deal. It only gets creepy when those retailers know you want a blender before you even search for anything about a blender. Maybe they figured it out because you've been liking those "Tasty" videos, or maybe they noticed you've been searching for smoothie recipes. Honestly, in the case of the blender, I don't care. But maybe there are certain search queries I don't want to determine the way I'm categorized and advertised to. I was doing updates on computers office-wide the summer before last, and I felt like I was invading my coworkers' privacy when I saw their advertisements for diapers or watches or Victoria's Secret. Why do I feel like it's an invasion of privacy when I see that data but not when third-party services sell my data? It's a game of convenience and mutual benefits. Sure, companies are selling my data and mining it to more accurately sell me their own products, but at least I get advertisements that are actually relevant to me. When you think about it, targeted advertisements are nothing new.
Access to data just allows them to be better, more effective. For now, I consider online advertising to be tolerable. I used to use Adblock because pop-ups and video advertisements are generally more irritating than convenient. With Adblock, you can even enable "Acceptable Ads." This keeps out the "malvertising" ads like "YOU JUST WON AN IPHONE CLAIM NOW!" without getting rid of the advertisements I want to see from stores I like to shop with. Although I don't think it's unethical to use these blockers, the Tom's Guide article brought up a great point: "when companies see revenue going up on the right types of ads, they'll change their practices accordingly." Similarly, when the options are either to scroll past an ad I don't particularly enjoy or to pay for content, I will choose the advertisement 99% of the time. Before moving off campus, I had one request: I wanted my dad to install a deadbolt on my door. A break-in is one of my biggest fears, so, as you might guess, backdoors and weakened encryption don't necessarily make me feel at ease. That being said, as a worrywart, the other extreme, no surveillance on anyone at all, would also make me a little uneasy. In a perfect world, no one would want to break in or cause harm, but we don't live in a perfect world. We also don't live in a black and white world. It's not always the good guys versus the bad guys. What if someone who is supposed to be a good guy takes advantage of his or her access to data?
I started out this assignment with the mentality that companies should not purposely weaken encryption or implement backdoors in their products for the purposes of government surveillance. We consider big questions like whether Apple is ethically responsible for protecting the privacy of its users or ethically responsible for helping to prevent violent or harmful activities that its platforms may enable, but we would be remiss if those were the only two situations we considered. What about a teenage girl scrolling through social media looking for pro-eating-disorder pictures? Should Instagram hide posts tagged #proana and #anaismyfriend? Should it redirect her to a help line? Is hiding those hashtag posts censorship? According to a Wired article by Emily Reynolds, "Instagram's pro-anorexia ban made the problem worse" (https://www.wired.co.uk/article/instagram-pro-anorexia-search-terms). Although Instagram banned certain search terms, those who wished to continue looking for the content created variants on the terms. Similarly, I believe that if technology companies tried to take a stand against violent or harmful activities in any public way, those wishing to continue to use the platform would find a way around that stand. It seems to me that the only way technology companies could take a stand against those issues would be through backdoor monitoring. Having now undergone a near-complete 180, I have more concerns to discuss. When we question a company's ethical responsibility for protecting its users' privacy, whom are we protecting that privacy from? Is it just those with malicious intent? What even constitutes malicious intent? Is it true that what someone doesn't know won't hurt them? And how much of a security risk is posed by a company opening a backdoor for the government? Who else could and will potentially use the same backdoor to gain access? I don't think worries about Big Brother are simply paranoia, but there are certainly nuances.
We've all heard about leaked celebrity pictures, and it makes sense that celebrities would be targeted, but what would make an individual target an average person? I still don't have a clear answer to a lot of those questions. In a perfect world, with a perfect government staffed by perfectly objective individuals who only care about national security and a perfect backdoor that only lets that perfect government in, I wouldn't have a problem with monitoring. I don't have anything to hide. That being said, as mentioned before, the world is not perfect, and seemingly innocuous personal details could be used maliciously in the wrong hands. I don't have a perfect answer about balance, but I look forward to continuing to reflect on the issue.