Amicism and the Company College Campus: Silicon Valley Problem Pt. 4

This is a continuation of my indefinite series, The Silicon Valley Problem.  Catch yourself up by reading Part One, Part Two, and Part Three, in which I discuss the subordination of creative work, the deliberate ambiguity of algorithm function, and the inability to create an effective boycott.

Silicon Valley leaders aren’t blind to the power they wield. On the contrary, it’s what gives them their messianic complexes. Don’t click those links; there’s no new information in there. What’s important is that I arrived at those links by clicking the first six results from googling “silicon valley messiah.” The point is, silicon valley leaders (continuing readers will recall that I’m using “silicon valley” as shorthand for break-as-you-go tech companies) know the power they wield, even if they don’t truly understand it. The problem is that their understanding of that power is largely rosy: They have the ability to control the lives of their users and their employees, but they see themselves as a fundamental force for good, that good being connectivity, union, free speech, et cetera. The word paternalism comes to mind: They know they’re more powerful than you, infinitely so, really, and so their duty becomes to protect you.

I’d like to propose an alternative term: amicism. It’s similar to paternalism, a branch of it maybe, but amicist companies are different in the way they present themselves and the kinds of things they do for their subordinates. I don’t have the space or knowledge to break down the amicist forces behind all of silicon valley’s activities, so today I’d like to focus just on their work culture.

Considering the Gilded Age
In the 1880s, George Pullman founded Pullman, IL just outside of Chicago for his workers. He provided housing, markets, libraries, and convenient access to work, all for the simple price of having to live there as an employee of the Pullman Company. This would become one of the earliest and largest examples of the company town, an idea that has become entrenched in memory as an example of the Gilded Age and was of course by no means limited to Pullman.

To the best of my knowledge, there is no Facebook, California (though maybe you could make a larger argument that Silicon Valley the location serves as a de facto company town for Silicon Valley the economic force), no company paying its workers in vouchers that can only be used at the company store, no company demanding its workers live on campus. Because the tech giants don’t present themselves in as blatantly oppressive and paternalistic a way as their Gilded Age predecessors, there’s been a resistance to calling them a new kind of company town. Yet just as in my last piece I discussed how Amazon and Google have managed a new form of monopoly that’s resistant to recognition as such, the tech giants have created a form of paternalism that’s more hidden, and acknowledging it means giving up certain things that make working there more palatable for the employee.

Defining Amicism
Where paternalism is the notion that a body with higher authority limits the free will of subordinates under the guise of acting as their protectors, amicism is when a body of higher authority limits the free will of subordinates under the guise of being their friend.

Where corporate paternalism creates the company store and the company town and forces its employees into it, amicism is more gentle. There’s an enormous food court of free lunch at work not because you have to eat there, just because we’re your buddy, let us buy you a bite. You want to go to the gym? Hey, I’ve got a gym, you wanna just come over and use mine? Traffic really sucks, you wanna ride my bus to work? Bring your dog? These things are offered to the employee as perks, and on the surface, they are. They offer convenient access to all things a person needs in their personal life, and generally for free. It’s the least we can do, says the company. Allow us.

On the surface, these seem like great ideas. There’s no pretending that Americans don’t have a work-life balance problem, and if offering extracurriculars makes an employee happy, what’s the problem?

Well, allow me to propose two: 1) These amenities create an environment where the corporate campus becomes the employee’s whole life, whether they’re working or not, and 2) They allow the company to appear progressive while mistreating peripheral workers.

The Company College Campus
The new company town is free, and like any other community, it’s got everything you need. You don’t need to pay for a gym membership, you don’t need to pay for food or drinks. You can walk outside, through the park, on a bike trail, grab a coffee with friends, play with your dog. You can do everything you’d ever want to do, just short of living there—oh, sorry, you can do that too!

But nobody’s forced to live there; it’s not a municipality. Less than a company town, more of a college campus, and this is by design. College students don’t have work hours, and they don’t have a work-life balance in the same way other adults do. If they get done with class, they’ll grab dinner, maybe hang out with friends, and then study afterwards. Or maybe they’ll stay up all night studying. Students don’t “clock out” the way that non-students do: If there’s work to be done and they’re nearby to do it, they’ll go do it. This is especially the case in high-competition, high-demand majors like engineering or computer science.

Supplying all of the “life” side of the work-life dichotomy ensures that employees spend long hours on campus even after they’re “done working.” Especially in the Bay Area and Seattle, nobody wants to sit in traffic for an hour after work just to get to the gym; much better to have a gym on campus. The company is incentivized to do this because it keeps employees local just in case more work needs doing. Read the link above: Employees report being on call 24/7 for a period of weeks, during which time they don’t leave town and seldom leave campus.

Sometimes the company will outwardly discourage this with incentives like unlimited PTO, but again, this is a trap. Companies with unlimited PTO report less time off taken by employees, who aren’t clear on what the appropriate limits to this policy are. Furthermore, in a traditional PTO system, time you don’t take off gets paid back to you, while under an unlimited PTO system, time off not taken is money lost.
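To make the dollars-and-cents point concrete, here's a back-of-the-envelope sketch. The salary and accrual figures are hypothetical, and payout rules for unused PTO vary by state and employer; the asymmetry is the point.

```python
# Hypothetical figures: a salaried employee accruing 15 days of
# traditional PTO per year who only takes 8 of them.
daily_rate = 400        # assumed pay per working day, in dollars
accrued_days = 15       # traditional annual PTO accrual
days_taken = 8          # days actually used

# Traditional PTO: unused accrued days are typically paid out.
payout_traditional = (accrued_days - days_taken) * daily_rate

# "Unlimited" PTO: nothing accrues, so there's nothing to pay out.
payout_unlimited = 0

print(payout_traditional)  # 2800
print(payout_unlimited)    # 0
```

Under these assumed numbers, the under-vacationing employee is out $2,800 a year simply because the "unlimited" policy means no balance ever exists to be cashed out.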

The result is that employees go from the culture of their engineering major in college (long work hours, little socializing, but convenient campus amenities), right to the same culture at work.

Friends Without Benefits
One of the saddest New York Times articles I’ve read in years shows two custodians: one a custodian for Kodak in the 1980s, one a custodian at Apple now. While the two got paid roughly the same in inflation-adjusted terms, the custodian at Kodak was a full, benefited employee of that company, with four weeks of paid vacation, stipends to go to college, and advancement opportunities. The custodian at Apple wasn’t actually an employee of that company, but of a separate company contracted by Apple, and she got none of those benefits.

The truth is that most major companies don’t hire their own custodial or maintenance staff anymore, and instead contract with outside companies. This keeps them, as they like to say, “lean and core-focused,” and offers up some kind of shining example of capitalism at its most efficient. But of course, if contractors get mistreated, it doesn’t reflect on the company that hired them in nearly the same way. Google famously got hit last year over its handling of sexual harassment complaints, but only after twenty thousand workers walked out over it.

Conclusion
Amicism looks good on paper. In fact, if you’re a recent college graduate lucky enough to have been hired by one of the largest corporations in history, amicism may look good in person too. If you’re lucky enough to be included in that friendship, you feel indebted to your company in a more personal way, more willing to give up time otherwise devoted to friends and family and return it to the company for free. After all, when the company becomes your friend, time spent there becomes time spent on friendship. This is necessarily exclusionary to the working class that allows a business to function, the result of a classist society in which it’s a rarity that a Stanford-educated programmer is friends with a custodian.

A business is responsible for treating its workers fairly and respectfully, but a friendship is a relationship between equals. When you cease to run your business like a business, you abdicate the responsibility to treat well the workers you don’t see as equals; when you run a business like a friendship, you invite in the classist underpinnings of the society in which you operate. Reinforcing that class system will always benefit the company: It allows the company to withhold benefits and fair treatment from its working class, and incentivizes its deskworkers to donate free labor in the name of gratitude.

The Power of Reasonable Boycott: The Silicon Valley Problem Pt. 3

This is a continuation of my series, The Silicon Valley Problem. In Part One, I discussed the semantics of content and the subordination of creative works; Part Two was about the implicit biases in algorithm design.

A year ago, I was sitting in the delicious and locally-owned Aqus Cafe in Petaluma, enjoying my lunch and a local California export, while composing a blog post that I thought was lambasting Prime Day, a consumer holiday drummed up from the aether to generate profit for Jeff Bezos.  Looking back on that piece, it's remarkably cagey about how I really feel regarding his company, and so it's only fitting that this year, on this most prosperous of holidays, I correct the record.  I concede that Amazon is not a Silicon Valley company formally, but its business model is as close to "move fast and break things" as they come, and the nature of this company perhaps best represents the problem I'm addressing.

In this piece, I contend that Amazon and Google, though they do not fit the traditional definitions of monopoly, create something more or less consistent with that concept by depriving the consumers of their powers under capitalism.

The Insufficiencies of the Old Model
The current definition of a monopoly hinges on proving two distinct components: 1) Control of the market, and 2) Abuse of that control.  This is to say that merely having control of the market is necessary but not sufficient to warrant breaking a company up; one must also prove that the company has used that power to exclude competitors, to raise prices on consumers, and so on.  The exploitative aspect of the monopoly was critical to this definition because to remove it would be to consider every new innovation a dangerous market power, which isn't the case. 

This necessity for actual exploitation does, however, create a gaping loophole in assessing the leveraging position of a possible monopoly.  It dates to a time when the largest companies dealt in infrastructure or natural resources--coal, oil, railroads, et cetera--and did little else.  Of course, that's not how things work anymore.  Google isn't just a search engine.  Amazon isn't just an online bookstore.  Both of these companies are spread across all spectra of technology, logistics, data-handling, innovation, social media, brick-and-mortar retail, and so on.  Today's megacorporations don't exist because they've exploited a single sphere of the economy to its max, they exist because they've taken large enough portions of all aspects of the economy, and we lack an appropriate definition for this kind of power.

Each market in which Amazon or Google operates can be said to be individually competitive--for every Prime Video, there's a Hulu, for every Google Search, a Bing--but the companies in question have enough of a market share in each sphere that their combined power far exceeds the sum of their parts, enough to deny the public:

The Consumer's Tool(s) under Capitalism
The idealists of capitalism like to say that the consumers have the entirety of the power.  Whether this is true or not in practice (it's not), at least in theory we can say that the failure of a business to provide an adequate product or service for the consumer will constitute an ultimate failure of the business itself, that the will of the people will overpower the will of a company.

For many institutional and systemic reasons, this doesn't exist in practice; nonetheless, under such a hypothetical economy, the consumer would indeed wield all the power, but all that power, theoretically endless, is actually vested in only one tool: the boycott.  If a product is inferior, the consumer moves on to a different producer; if a product is superior but comes at a high price (economic or ethical), the consumer moves on again.  What is a strike if not the consumer (of employment opportunity) boycotting the producer (of employment)?

The only weapon the public has in the coliseum of capitalism is the ability to turn its back, and thus in order to prevent the subjugation of the consumer to the whims of the corporation, the power of reasonable boycott must be preserved.

Defining the Power of Reasonable Boycott
I define the power of reasonable boycott as follows: The ability for a person or people to boycott a specific company without a significant and unreasonable disruption of their life.  Far from arbitrary, this definition is implicit in the traditional understanding of the monopoly: When monopolies were centered on natural resources or major infrastructure contracts, they came with an inherent inability to be reasonably boycotted.  One couldn't boycott Standard Oil, for instance, without being unable to drive their car or heat their home. 

How Amazon Threatens this Power
Just like last year, I'm composing this article while sitting at the delicious and local Aqus Cafe in Petaluma.  Nothing has transpired in the last year that has convinced me to take my business elsewhere: Aqus is convenient, locally supportive, and tasty.  However, if that changed, I could exercise my power of reasonable boycott and walk down the street to a different cafe.

I've tried for years to boycott Amazon.  A long time ago I cancelled my Prime membership.  Then I stopped ordering things through their site.  Then I stopped using the Kindle App.  The problem is, I still watch Netflix, file with TurboTax, and use hand lotion.  If I were to truly boycott Amazon, I would also have to boycott the companies that use its cloud-computing platform, Amazon Web Services, including (among literally thousands of others):

- Netflix
- Unilever
- Adobe
- Airbnb
- General Electric
- Kellogg's
- BMW
- Autodesk
- Canon
- Comcast
- Hearst Corporation
- HTC
- Intuit
- Johnson & Johnson
- Lyft
- Spotify
- Nordstrom
- Pfizer
- Samsung
- Siemens
- Scholastic
- Slack
- Yelp
- NASA
- Zillow

This doesn't include Whole Foods, now part of Amazon's retail enterprise and whose discounts given to Prime members feel somehow like self-dealing, or the US Post Office, which now delivers 40% of all Amazon packages and profits while doing so.  Oh, and that link is to an article from The Washington Post, which, while not owned by Amazon, is owned by Jeff Bezos, along with many, many other companies.

Boycotting a company this large becomes impractical if not entirely impossible, and would require boycotts of everything from diapers to cereal to newspapers to parts of the federal government.  And we haven't even gotten to Alphabet.

Alphabet, Facebook, and the Superficial Boycott
Okay, this subhead says Alphabet only for the humorous juxtaposition with the sentence just before.  Obviously Alphabet has its fingers in all the pies just like Amazon does, and fulfills all of the criteria above for an impractical or impossible boycott, but I want to specifically talk about Google.

What Google (and to a lesser, though more visible extent, Facebook) brings to the table is an interesting combination of the old and new definitions of monopoly.  A transgression against the consumer's power of reasonable boycott as well as the cornering of one specific, powerful natural resource: data.

If I were to choose to boycott Google today (setting aside how tangled and complicated such a boycott would be), it wouldn't change the fact that I've been a user of the platform since it began collecting my data.  That data is out there to be used, never to be recaptured.  This creates an unreasonable boycott simply due to the fact that it's a superficial one--I've ostensibly stopped using Google, but my patronage at any point in Google's history means that the data I generated can be sold and re-sold long after I've left the platform. 

Facebook now allows you to delete the data it has on you, but this function is relatively new (as of 2018) and limited to Facebook itself, not the data you've given third-party apps that use the platform, regardless of whether that third-party data was obtained legally or otherwise.  Google allows you a similar option, but in both cases the process is limited and complicated, and one should not forget that it took a foreign power meddling in a presidential election to bring it about in the first place.

Conclusion
I pick mostly on Amazon today because it's Prime Day, and thus a day prime for pointing out that the power megacorporations have over our lives is truly unprecedented.  Traditional notions of monopoly no longer apply, as each of these corporations faces competition, however minuscule, in each of the areas in which they operate, and yet because their fingers are in so many pies, the fundamental power to affect capitalism shifts inexorably away from the consumer.  Without the protection of the one tool we have, the power of reasonable boycott, consumers have no recourse against the companies that, while they may not display traditional exploitative behaviors, are nonetheless in a position to act with relative impunity, knowing that a boycott of the most obvious of their products will still yield income in others, and that a complete and total boycott is impossible and highly disruptive.

I won't stop anyone from using Amazon or the companies that host with them (hell, I'm listening to Spotify right now), but I will urge caution.  Use Amazon conscientiously, not merely as a way to get the hottest no-Black-Friday-but-totally-Black-Friday deals. 

On the Eighth Day, God Created the Algorithm: Silicon Valley Problem Pt. 2

This is a continuation of my indefinite series, The Silicon Valley Problem.  Part One, regarding the semantics of content and the subordination of creative endeavors, can be found here.

With Facebook once again back in the news for its practices during the 2016 election, I can't think of a better time to discuss one of the biggest (yet best hidden) insidious gifts of Silicon Valley: The complex algorithm.  Before I begin in earnest, I should state that I first started thinking about this problem after listening to the terrific 99% Invisible episode on the subject last fall; this piece also draws on Cathy O'Neil's ideas in her book WEAPONS OF MATH DESTRUCTION.  If you are unfamiliar with either or both, please follow those links; they're terrific.

The Case Against Luddism: A Disclaimer
I'm not a luddite, I promise.  Yes, I may have a wristwatch made of wood, a Royal typewriter, and a Mac, but I really don't hate technology.  For all of Silicon Valley's problems, the advancement of computer science is not one of them; this blog should not be interpreted to say that the existence of complex predictive and analytical algorithms is itself problematic.  There's an argument to be made there, but it's not here.

That being said, I am not a computer scientist (you are reading this on a Squarespace site, after all).  I've tried to make sure the information that follows is correct, but if I've made any major misinterpretations, even argument-defeating ones, I welcome corrections.

The Un-Understandable
Quite a lot has been written on how the designers of machine-learning algorithms don't understand how their ultimate products function, or, if they do, the information is highly proprietary.  But I'd like to draw your attention specifically to this article from the MIT Technology Review: "The Dark Secret at the Heart of AI".  The article itself is terrifying, as most of these things are: programs (in this case self-driving cars) are making decisions that seem logical from the outside, but whose underlying reasoning is impossible to pin down.  This is a huge problem when algorithms determine everything from what videos you see on YouTube to which patients are most appropriate for a clinical trial of life-saving drugs, but allow me to draw your attention to the following in particular:

"'If you had a very small neural network, you might be able to understand it,' Jaakkola says. 'But once it becomes very large, you have thousands of units per layer and perhaps hundreds of layers, then it becomes quite un-understandable.'"

It's not that the algorithms aren't understood, it's that they can't be understood.  The complexity of thought required is beyond the scope of the human, something quite new to our society.  This is a source of danger, and one that seems to be taken lightly in the industry.  If I were to hand you a bomb, for instance, and say "please don't drop this, it has the capacity to blow up the whole room and both of us with it," it's a safe bet you would handle it very delicately whether it was armed or not. 
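To give Jaakkola's numbers some scale, here's a rough sketch of how fast the connection count grows. The layer sizes are assumptions chosen to match the quote, not any real model, and this counts only the weights between fully-connected layers.

```python
# Count the weights (connections) between consecutive layers of a
# fully-connected network, small vs. "thousands of units, hundreds of layers."

def weight_count(layer_sizes):
    """Connections between each pair of adjacent layers, summed."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

small = weight_count([10, 10, 10])   # a network you could plausibly trace by hand
large = weight_count([1000] * 100)   # the scale Jaakkola describes

print(small)   # 200
print(large)   # 99000000
```

Two hundred weights might be inspectable; ninety-nine million individually meaningless numbers is what "quite un-understandable" looks like in practice.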

I'm sure extensive testing and thought is given to how these algorithms operate within society, but they are nonetheless both universal and proprietary, and in that combination lies the danger.  We drive around with tanks of combustible gasoline under our seats, but we don't light cigarettes at gas stations because we know and understand the risks, and can take steps to reduce the possibility of failure.  If taking proper precautions is still too great a risk, the consumer can elect not to use a car.  The possibility of informed acceptance of risk is assumed.

On the other hand, the un-understandability of neural networks and machine-learning algorithms means that we can never fully know how to reduce risk, and when the algorithm is understood, the information is held close to the vest as valuable IP.  Thus, informed consent in the use of these products becomes impossible.  Millions of humans are subjected to algorithms that are poorly understood and sometimes highly risky.

The Unbiased Robot Fallacy
"Okay," you might object, "this was inevitable. We've now reached a point where we can design programs to do things that human thought alone is incapable of doing."  I would say yes, that's true, and that is a triumph of human ingenuity.  The problem is that we didn't create the perfect algorithm, not in YouTube's instance, not in Facebook's instance, not in Bank of America's or NASA's or Reddit's or a self-driving car's instance.  The trope of the unbiased, cold, calculating machine is a fallacy for the simple fact that, at the end of the day, even if the algorithm was tailored and reiterated by machines, it was still created by a human, and thus subject to human biases.  I don't want to re-tread ground already well covered by the links in the introduction, so I'll refer you back to 99% Invisible and WEAPONS OF MATH DESTRUCTION regarding algorithms that perpetuate racism and sexism--it's a fascinating consideration of the fallacy.

If this were an Intro to Philosophy course, I'd bring up the trolley problem and how self-driving cars have to make ethical calculations as to whether to endanger the driver or the bystander, but we don't have to get that dramatic for the effects to be seen on a grander scale.  For instance, Facebook decided long ago that it was better for its algorithm to favor clicks and engagement regardless of the content (provided it's not pornography or hate speech, which is itself an additional value judgment we don't have space to get into now) than to favor the propagation of legitimate information.  This was a business call on their part, and there may or may not be a readily available explanation for why a certain article did or did not propagate according to the algorithm, but that algorithm was nonetheless programmed to place certain values on aspects of the content as decided by its programmers.

To quote the aforementioned 99% Invisible episode:

"Every algorithm reflects the choices of its human designer. O’Neil has a metaphor to help explain how this works. She gives the example of cooking dinner for her family. The ingredients in her kitchen are the 'data' she has to work with, 'but to be completely honest I curate that data because I don’t really use [certain ingredients] … therefore imposing my agenda on this algorithm. And then I’m also defining success, right? I’m in charge of success. I define success to be if my kids eat vegetables at that meal …. My eight year old would define success to be like whether he got to eat Nutella.'"
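A toy version of O'Neil's metaphor might look like this. The meals and scores are invented for illustration, but the mechanism is the one she describes: the same data, run through two human-chosen definitions of "success," produces different winners.

```python
# The "data": two possible dinners, with made-up attributes.
meals = [
    {"name": "stir-fry", "vegetables": 3, "nutella": 0},
    {"name": "pancakes", "vegetables": 0, "nutella": 2},
]

def parent_success(meal):
    # The parent defines success as "kids eat vegetables."
    return meal["vegetables"]

def kid_success(meal):
    # The eight-year-old defines success as "got to eat Nutella."
    return meal["nutella"]

# Same data, same selection procedure, different human-chosen objective.
print(max(meals, key=parent_success)["name"])  # stir-fry
print(max(meals, key=kid_success)["name"])     # pancakes
```

Nothing in the selection step is "biased" in any detectable way; the bias lives entirely in which scoring function got shipped, a decision no amount of staring at the output will reveal.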

The Burden of Design
Again, much of this is necessary and good in a vacuum.  It was inescapable that we would one day program something more complex than ourselves just as we built a lever that could lift more than our arms.  The problem is that the public trust, bordering on blind trust, in the algorithm removes the burden of design from the algorithm's creators.  It's what allows Facebook to claim that it's merely a platform, and that it's not its place to determine what content its users see and what they don't, or that, as 99% Invisible used as its example, United Airlines isn't to blame for who it selected to remove from that fateful, violent flight.

The difference seems to be that because an algorithm emulates thought, it can be seen as having made a decision separate and apart from the decision of the human that designed it.  This may be somewhat true, though we'd have to litigate the definitions of "decision" and "thought," but to linger on that is to get lost in the weeds.  Until we determine that programs have personhood, in the eyes of responsibility they must remain no more than complicated Rube Goldberg machines--we may not see all their working parts, but they act according to a set of operations that lack higher-order thought, and thus cannot bear responsibility for their actions.  Only an architect may know how her building stays standing, but when it collapses she is still responsible.

In the meantime, the operations of the algorithms that define our lives need to be made known to the public, and monopolistic practices by the companies that create them need to be curtailed.  To do nothing is to deny the public an informed assessment of risk, and to force us to subject ourselves to potentially destructive sorting with no knowledge or alternative.  If an algorithm cannot be understood in its entirety by the company that created it, that company must be held responsible when it goes wrong.

Discontent over Content: The Silicon Valley Problem Pt. 1

Even when I was starting out as a freelancer, when no sane person would dare turn down a client, I had one rule: I don't write content.  I've written website copy, promo copy, manuals, handbooks, product descriptions, blog posts, but I have never accepted a client that pitched their project using the word content.  The reason is this: Even if the thing I end up writing could be described as such, someone who pitches content to me is demonstrating a diminution of the importance of creative work.  This reason may seem mostly (or entirely) semantic, but semantics is important, and understanding the impact a subtle linguistic change can make is crucial to understanding the larger cultural shifts of the 21st century.

This is the first part of a series of indefinite length and undetermined frequency about the larger impact of Silicon Valley culture on the arts, language, writing, and the capitalistic compact.

The Semantics of Content
I was looking for consulting work the other day when I came across an ad on the website AngelList, which is (to use the stereotypical Silicon Valley "but for" descriptor) Monster.com but for startups.  For your profile, they ask you to select your current position from a dropdown box, and, with no option for other, here are your choices:

[Two screenshots of AngelList's position dropdown menu]

Obviously something like writer or editor was probably not going to be there, but I was hoping for at least communications or editorial manager, or even something tangentially related to the act of creatively smashing words together other than Creative Director.  But of course, the closest thing that I could find to what I did was probably Content Creator.  But I'm not a content creator, I'm a writer.  I don't create content, I write.

Okay, so what's the problem with content?  I'm italicizing the word because it's the word I'm referring to, not the concept.  Of course websites need to be filled, blogs need to say things, marketing campaigns need to have messages, and of course those editorial pieces are the contents of that framework.  I have no problem with that.  The problem arises when those things are gathered under the blanket of content, rather than what they actually are: Creative works, pieces, articles, et cetera.  Shoving those things under the umbrella of content dehumanizes the act of creation, mechanizes it.  It implies that these things are not especially saying anything, and are just filling space on the page.  It ultimately diminishes their role in (here's the groan before saying it) value addition.  It places the creative below the engineer or the entrepreneur under the assumption that what is really the creation, what is really the hard work, is the construction of the vehicle, and not what goes inside of it.  The website, the platform, the marketing campaign--that's the hard work, and you, the creative, you just churn out the contents of it.

While it may be the case that a person first visits a certain website or uses a certain platform because of its utility and design, and not for its writing or video production, the use of the word content nonetheless subordinates the writer, the designer, the producer, to the engineer, devalues the production of creative work to something mechanical, algorithmic (more on that phrasing in a later post), something that, ultimately, anybody could do, and you're lucky that we're hiring you to do it (but it's only because we're too busy to deign to do it ourselves).

SEO and the Commodification of Words
This is not the commodification of words writ large, but of individual buzzwords.  Now, it's nearly impossible to get a content-writing job unless you're familiar with and practiced at SEO.  SEO is the holy grail of freelancers who do most of their work online because if they're not careful, it's all that they'll be hired to write.  It has a more subdued role in traditional journalism, but when that journalism is online, it still plays a role (I can't remember what comedian said, "If there's a question mark at the end, it isn't news," but boy would they be disappointed).

This takes the creativity out of creative work, not because the people hired to write it are less creative (creators are not considered such by their success at being creative, but by their intent), but because SEO actively discourages creativity.  Creative syntax, creative semantics, creative lexicology, these things don't show up in search engines precisely because they are creative--if everyone could search for them they wouldn't be particularly novel.

This doesn't create a huge problem for established writers, even established freelancers.  The woman writing for the Atlantic or the guy writing for the Washington Post will still get those jobs and can still be creative at those jobs, but for the up-and-comer, for the guy (cough cough) who moonlights freelancing to fund other creative writing endeavors, there can be no recognition of creativity, because such creativity is discouraged in smaller markets that won't take chances.  This is making it harder and harder for writers to establish themselves, because they're forced to write homogeneous, engineered language.

The Wider Impact of Content
The word content has spread so far and wide that it's now the first listing on Wikipedia when you search the word.  Not only are websites and platforms using the term to describe their words; blogs and online periodicals, places you go to first for the words they write, are now calling those words content.  The subordination of creative work has expanded into places nobody would visit if they weren't interested in the creative works those companies produce.  The pervasiveness of this semantic change has devalued writers, editors, designers, and artists all over the country, to the point where most companies don't bother to hire in-house editorial managers anymore (who, aside from the obvious experience and linguistic care they bring to their work, would do things like establish a consistent style guide) and instead either go with freelancers or forgo the professional altogether.  It would overplay my hand to say that this has caused the deemphasis of lexical mindfulness in conversation and led to a general semantic disregard, but it may be a contributing factor.

But I don't think it overplays my hand to say that Silicon Valley culture devalues the creative, and that because of that culture's pervasiveness in society, creative endeavors are overall subordinated to STEM and business endeavors.  This isn't because of the word content, but the use of that word both represents and helps cultivate that mindset.

Uncomfortable Truths for Prime Day

Ahhh Prime Day.  The day when your Facebook and Twitter feeds are flooded with free Amazon advertising by excited friends grabbing great deals, and the day when the rest of us grumble silently to ourselves and blog.

I can't necessarily say that Amazon has been a net negative in the world.  They're certainly convenient, which counts for a lot; they've increased access to literature for a lot of Americans who simply weren't reading before but have now found the time or energy to pick up an audiobook or ebook; and they generally treat their customers pretty well.  Nor can I necessarily say they've been a net negative for the publishing industry, with the marketplace creating a wide and accessible distribution platform for publishers and the surge of ebooks giving writers and houses alike some fat margins and royalties--at least so far.

We should always be wary, though, when one company has this much control over the market.  Amazon does occasionally get angry, and when they do, it results in things like pulling Macmillan titles over price disputes, holding POD publishers hostage to make deals with BookSurge, or breaching contracts to force authors to give their books away for free.  This is to say nothing of the not-so-subtle placement of their New York brick-and-mortar store or their much-publicized feud with Hachette.

Maybe it's just the liberal sensibility in me that says to beware of Amazon, but just as Barnes & Noble squeezed indie bookstores all over the country, so too could Amazon squeeze the indie publishing houses that derive a significant portion of their sales through its marketplace.  Right now it's not in Amazon's best interest to get rid of these publishers, but we should never depend on the kindness or the aligned incentives of Amazon to keep the infinitely smaller indie publishers safe.

What I'm trying to say is on Prime Day, sure, go watch some TV, or go order yourself a new clothes hamper.  But if you want to buy a book, support your local independent bookstore.  Don't give Amazon all the power; make sure indie publishers have the ability to sell elsewhere.

Insidious Happiness

I haven't blogged in half a year.

I'd like to say that's because I've been overwhelmed with the vast amount of writing and publishing and editing I've been doing, but that would be somewhat untruthful.  I have been writing, I have been editing, but I'd have had more than enough time to scribble some words if it weren't for the biggest danger to the early-career creative: A satisfying day job.

When I first decided to forgo the world of stability and employer-provided health insurance and become self-employed as a writer, I wanted a day job that was inherently unfulfilling.  I wanted to work a cash register or flip a burger or two, something that would provide a paycheck, but would leave me feeling unproductive at the end of the day unless I'd also written.  That's certainly not to disparage those jobs--I've worked them before and hopefully will again--but while I may feel confident that I'd put in a hard day's work, I wouldn't feel especially satisfied.  This would keep me hungry, make sure I didn't slip into any kind of career complacency.

What I got instead is a job in the only other field I'm remotely qualified in: rowing.  And the problem is that I love it.

So yes, I do sometimes wish I didn't commute to Petaluma seven days a week (for someone who labels himself as "working from home," I sure do have a hell of a commute), and no, the job hasn't halted my productivity in writing and freelancing.  But the work there is inherently rewarding.  If I put in a hard day's work at the boathouse, I can see real, tangible results, and I can be proud of that.  It's easy to be happy after a good season, and while that theoretically eases the pressure of having to write my way to self-actualization, it also creates this gnawing voice at the back of my mind that says, "Hey dude, you're not all that hungry right now."  And it's right.

Last fall, I could put in six or seven hours at the keyboard before heading up to my day job, but by the end of the spring, I was putting in only three or four.  That's not nothing, but it has extended my production schedule beyond what I'd hoped.  As the spring season picked up, I found my extra hours of procrastination weren't filled with reading or following the publishing industry, but with watching rowing videos and checking race results.  It's a feeling I'm well acquainted with: it lowered my GPA every other semester in college.  At the season's crescendo I was traveling every weekend, spending the entire day at the boathouse, and any work time not spent coaching was spent freelancing, which meant I was writing very little.  The front of my brain was genuinely satisfied, but the back of my brain was shaking its head.

This cognitive dissonance clearly created some kind of guilt, one I didn't recognize until the season finally ended and I put in my first full ten-hour day at the keyboard in months--and I felt incredible.

So now it's the summer.  Now the rowing season is over.  Now the manuscript is finished (2 months late, but finished) and the queries are floating in the aether waiting to turn into requests for fulls (ha!).  Now I can finally become dissatisfied enough in my daily productivity to put full days back into my actual career.

Take that, insidious happiness!

What the Side Hustle Means for Millennials

What an interesting phenomenon that's not quite modern and not quite classic.  Your friend who cuts hair or takes grad photos does so with a little more seriousness than a hobby, but not quite the institutional legitimacy of a job.  Something like the young adult form of babysitting, maybe.

I say "not quite classic" because it's certainly a phenomenon that's on the rise; in previous generations, of course, holding part-time jobs outside of school or outside of your primary profession was by no means unusual, but in the past it wasn't conducted on an informal basis, under the table, amongst friends.  Nowadays the side hustle has become a kind of friendly black-market communal economy, with colleagues helping each other out with affordable services like (to name the biggest ones I've encountered) cutting hair, taking professional photos, and yes, editing (I'm lucky enough to have turned my former side hustle into a career, but that's the exception to the rule).  A quid pro quo, with Venmo operating as the central bank.  For some it fulfills only the necessity to have a little spending cash, but for others it's a necessary practice in order to end each month in the black.

But I say "not quite modern" as well, because we have seen this before if we go back far enough.  If you return to a time when a substantial portion of the population still practiced agriculture (not necessarily pre-industrial, but we'll say pre-suburb), this was the law of the land for those outside the city.  If you needed a well dug or a home improvement made, if you needed something smithed, et cetera, you sought out someone nearby who had the basic skills and equipment to do it, even if their trade was something unrelated.  For these services you paid either in cash or in kind, and of course never taxes.  It was an economy that existed in the absence of affordable professionals for these services.  Just as today not everyone can pay a professional editor but everyone needs something edited, so too could not everyone back then afford to pay a smith to make nails--but everyone needed nails.

So what does this say about the Millennials, who've revived this not-quite-classic economy?  Well, for one, I'd say it rejects the notion of entitlement.  These side hustles often stem from things the practitioners would do as a career if it were lucrative enough; often they were the result of "chasing your dreams" before deciding to "study something practical."  Entitlement would suggest that the inability to make a career out of the side hustle would produce complaints; instead, the existence of a career--usually in a different field entirely, one closer to what the practitioner studied or studies--shows a willingness to work and, as critics are wont to say, "be practical."

Second, it demonstrates a divide between those who can afford to pay for professional services and those who can't.  The beauty of capitalism, of course, is that no money can be made off of things people don't need or want; the fact that there is a black market for affordable haircuts and grad photos means that the practitioners have seen an opportunity and exploited it.  By the same token, the fact that the practitioners need that side hustle for extra spending cash, or that they can't make a profession out of it, shows that the wages from a career alone are too low.

If ever there were a definitive measure of income inequality, the rise of the side hustle would be it: it existed in the pre-industrial and premodern eras, it existed during the Great Depression, it went away during the prosperity of the second half of the 20th century, and it has returned now.

But that's not to say any practitioner or consumer of the side hustle should be condemned.  In fact, the opposite.  Those who practice the side hustle and those who consume it are those who clearly need it, so without sounding too self-serving, do the charitable thing and support your local side hustle today.

The New Divide in Literature

As writers, we write to appeal.  It's never explicit, and we shouldn't tailor our writing based on appeal, but by definition a prominent writer is prominent because he or she appeals to a large enough audience.  The old divide this created is obvious: that between the high-culture "literary" forms and the mass-market "genre fiction" (but man do I hate those terms).

However, being through the looking glass as we now are, my prediction is that we're headed for a new kind of divide, this time along political lines.  Certainly this divide has always existed--I doubt Al Franken was too popular with conservatives even before his time in office--but in the past both parties have always found common ground in some form or another, whether in literature, genre fiction, or the newspaper.  Even to say nothing of the proliferation of fake news and the declining relevance of facts in political discourse, it's clear that written media is now more divided than it's ever been.  Previously radical left- and right-wing publications such as Jacobin and, of course, Breitbart have been elevated to the status of the more entrenched cornerstones of the industry, and some of those cornerstones are now taking more open and aggressive political stances, as we've seen with The Atlantic.  I don't need to mention here the new political power of Steve Bannon and what that means for politics moving forward.

So whereas before a liberal might consent to read a good piece of reporting by a right-leaning publication and vice versa, we've now reached a point of peak factionalism wherein to support one faction is not only to view the other as lower in quality, but to outright disregard them as a source of any credibility at all.

This is going to become a problem for the major publishers, who own imprints that fall on both sides of that divide.  Just as The New York Times was faced with the decision of whether or not to use the word "lie," so too will Simon & Schuster have to decide what to do with Threshold after signing Milo Yiannopoulos, which led to a revolt from bookstores as well as from other S&S authors.  All the major houses own conservative imprints, and while not all of these imprints are signing authors as controversial as Milo Yiannopoulos, the widespread alienation of readers remains on the line, and these companies are going to have to choose a side.

It remains to be seen how the major players will play this game, or whether it will lead to a rise in indie presses, but I would not be enormously surprised to see at least a partial reorganizing of the literary scene as a result.