
Big Data Processing at Spotify: The Road to Scio (Part 1)


[Image: Scio logo]

This is the first part of a two-part blog series. In this series we will talk about Scio, a Scala API for Apache Beam and Google Cloud Dataflow, and how we built the majority of our new data pipelines on Google Cloud with Scio.

Scio
> Ecclesiastical Latin IPA: /ˈʃi.o/, [ˈʃiː.o], [ˈʃi.i̯o]
> Verb: I can, know, understand, have knowledge.

Introduction

Over the past couple of years, Spotify has been migrating our infrastructure from on-premises data centers to Google Cloud. One key consideration was Google's unique offering of high-quality big data products, including Dataflow, BigQuery, Bigtable, Pub/Sub and many more.

Google released Cloud Dataflow in early 2015 (VLDB paper) as a cloud product based on FlumeJava and MillWheel, two Google-internal systems for batch and streaming data processing. Dataflow introduced a unified model for batch and streaming that consolidates ideas from these previous systems, and Google later donated the model and SDK code to the Apache Software Foundation as Apache Beam. With Beam, an end user builds a pipeline using one of the SDKs (currently Java and Python), which gets executed by a runner for one of the supported distributed systems, including Apache Apex, Apache Flink, Apache Spark and Google Cloud Dataflow.

Scio is a high level Scala API for the Beam Java SDK created by Spotify to run both batch and streaming pipelines at scale. We run Scio mainly on the Google Cloud Dataflow runner, a fully managed service, and process data stored in various systems including most Google Cloud products, HDFS, Cassandra, Elasticsearch, PostgreSQL and more. We announced Scio at GCPNEXT16 last March and it’s been gaining traction ever since. It is now the preferred data processing framework within Spotify and has gained many external users and open source contributors.

In this first post we will take a look at the history of big data at Spotify, the Beam unified batch and streaming model, and how Scio + Beam + Dataflow compares to the other tools we’ve been using. In the second post we will look at the basics of Scio, its unique features, and some concrete use cases at Spotify.

Big Data at Spotify

At Spotify we process a lot of data for various reasons, including business reporting, music recommendation, ad serving and artist insights. We serve billions of streams in 61 different markets and add thousands of new tracks to our catalogue every day. To handle this massive inflow of data, we have a ~2,500-node on-premises Apache Hadoop cluster, one of the largest deployments in Europe, that runs more than 20K jobs a day.

Spotify started as a Python shop. We created Luigi for both job orchestration and Python MapReduce jobs via Hadoop streaming. As we matured in data processing, we began to use a lot of Scalding for batch processing. Scalding is a Scala API from Twitter that runs on Cascading, a high-level Java library for Hadoop MapReduce. It allows us to write concise pipelines with significant performance improvements over Python. The type-safe functional paradigm also boosts our confidence in code quality and correctness. Discover Weekly, one of our most popular features, is powered by Scalding (BDS2015 talk). We also use Apache Spark for some machine learning applications, leveraging its in-memory caching capability for iterative algorithms.

On the streaming side we've been using Apache Storm for a few years now to power real-time use cases like new user recommendation, ads targeting and product metrics. Most pipelines are fairly simple, consuming events from Apache Kafka, performing simple filtering, aggregation and metadata lookups, and saving output to Apache Cassandra or Elasticsearch. The Storm API is fairly low-level, which limited its application to complex pipelines. We've since moved from Kafka to Google Cloud Pub/Sub for ease of operations and scaling.

Apart from batch and streaming data processing, we also do a lot of ad-hoc analysis using Hive. Hive allows business analysts and product managers to analyze huge amounts of data easily with SQL-like queries. However, Hive queries are translated into MapReduce jobs, which incur a lot of I/O overhead. On top of that, we store most of our data in row-oriented Avro files, which means any query, regardless of the actual columns selected, requires a full scan of all input files. We migrated some core datasets to Apache Parquet, a columnar storage format based on Google's Dremel paper, and have seen many processing jobs gain a 5-10x speedup when reading from Parquet. However, Parquet support in both Hive and Scalding has some rough edges, which limited its adoption. We've since moved to Google BigQuery for most ad-hoc query use cases and have experienced dramatic improvements in productivity. BigQuery integration is also one of Scio's most popular features, which we'll cover in the second part.

Beam Model

Apache Beam is a new top-level Apache project for unified batch and streaming data processing. It was known as Google Cloud Dataflow before Google donated the model and SDK code to the Apache Software Foundation. Before Beam, the world of large-scale data processing was divided into two approaches: batch and streaming. Batch systems, e.g. Hadoop MapReduce and Hive, treat data as immutable, discrete chunks, e.g. hourly or daily buckets, and process them as a single unit. Streaming systems, e.g. Storm and Samza, process continuous streams of events as soon as possible. There is prior work on unifying the two, like the Lambda and Kappa architectures, but none that addresses the different mechanics and semantics of batch and streaming systems.

Beam implements a new unified programming model for batch and streaming introduced in the Dataflow paper. In this model, batch is treated as a special case of streaming. Each element in the system has an implicit timestamp and window assignment. In streaming mode, the system consumes from unbounded (infinite and continuous) sources. Events are assigned timestamps at creation (event time) and windowed, e.g. into fixed or sliding windows. In traditional batch mode, elements are consumed from bounded (finite and discrete) sources and assigned to the same global window. Timestamps usually reflect the data being processed, e.g. the hourly or daily bucket an element belongs to.

[Figure: the Beam Model]

This model also abstracts parallel data processing into two primitive operations: parallel do (ParDo) and group by key (GBK). ParDo, as the name suggests, processes elements independently in parallel. It is the primitive behind map, flatMap, filter, etc. and behaves the same in either batch or streaming mode. GBK shuffles key-value pairs on a per-window basis to collocate the same keys on the same workers, and powers groupBy, join, cogroup, etc. In streaming mode, grouping happens as soon as elements in a window are ready for processing. In batch mode, with a single global window, all pairs are shuffled in the same step.
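
To make the mapping concrete, here is a minimal Scio sketch; the file paths and the keying function are illustrative assumptions, not from the original post. The map and filter steps each compile to a ParDo, while groupByKey is a GBK.

    import com.spotify.scio._

    object PrimitivesExample {
      def main(cmdlineArgs: Array[String]): Unit = {
        val (sc, args) = ContextAndArgs(cmdlineArgs)
        sc.textFile(args("input"))        // bounded source, one global window
          .map(_.toLowerCase)             // ParDo: element-wise, fully parallel
          .filter(_.nonEmpty)             // another ParDo
          .map(line => (line.head, 1L))   // ParDo emitting key-value pairs
          .groupByKey                     // GBK: shuffle, collocating each key
          .map { case (k, vs) => s"$k: ${vs.sum}" }
          .saveAsTextFile(args("output"))
        sc.close()                        // submit the pipeline
      }
    }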

With this simple yet powerful abstraction, one can write truly unified batch and streaming pipelines in the same API. We can develop against sampled log files, parsing timestamps and assigning windows to log lines in batch mode, and later run the same pipeline in streaming mode on Pub/Sub input with little code change. Check out Beam's mobile gaming examples for a complete set of batch + streaming pipeline use cases.
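
The sketch below illustrates the idea, with the caveat that the log format, timestamp parsing, window size and I/O locations are all assumptions made up for this example (and Scio API details vary by version): the core transform is shared, and only the source differs between batch and streaming.

    import com.spotify.scio._
    import com.spotify.scio.values.SCollection
    import org.joda.time.{Duration, Instant}

    object UnifiedLogCount {
      // Hypothetical log format: "<ISO-8601 timestamp> <endpoint> ...".
      def eventTime(line: String): Instant = Instant.parse(line.takeWhile(_ != ' '))

      // Source-agnostic logic: assign event time, window, count per endpoint.
      def countEndpoints(lines: SCollection[String]): SCollection[(String, Long)] =
        lines
          .timestampBy(eventTime)
          .withFixedWindows(Duration.standardHours(1))
          .map(_.split(" ")(1))
          .countByValue

      def main(cmdlineArgs: Array[String]): Unit = {
        val (sc, args) = ContextAndArgs(cmdlineArgs)
        // Unbounded Pub/Sub topic in streaming mode, bounded files in batch mode.
        val input =
          if (args.boolean("streaming", false)) sc.pubsubTopic(args("topic"))
          else sc.textFile(args("input"))
        countEndpoints(input)
          .map { case (endpoint, n) => s"$endpoint\t$n" }
          .saveAsTextFile(args("output"))
        sc.close()
      }
    }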

Enter Scio

We built Scio as a Scala API for Apache Beam’s Java SDK and took heavy inspiration from Scalding and Spark. Scala is the preferred programming language for data processing at Spotify for three reasons:

  • Good balance between productivity and performance. Pipeline code written in Scala is usually 20% the size of its Java counterpart, while offering comparable performance and a big improvement over Python.
  • Access to a large ecosystem of both Java infrastructure libraries, e.g. Hadoop, Beam, Avro and Parquet, and high-level numerical processing libraries in Scala, like Algebird and Breeze.
  • Functional and type-safe code is easy to reason about, test and refactor. These are important factors in pipeline correctness and developer confidence.

In our experience, Scalding or Spark developers can usually pick up Scio without any training, while those from a Python or R background usually become productive within a few weeks, and many enjoy the confidence boost from functional and type-safe programming.

So apart from the similar API, how does Scio compare to Scalding and Spark? Here are some observations from different perspectives.

Programming model

  • Spark supports both batch and streaming, but in separate APIs. Spark supports in-memory caching and dynamic execution driven by the master node. These features make it great for iterative machine learning algorithms. On the other hand it’s also hard to tune at our scale.
  • Scalding supports batch only and there’s no in-memory caching or iterative algorithm support. Summingbird is another Twitter project that supports batch + streaming using Scalding + Storm. But this also means operating two complex systems.
  • Scio supports both batch and streaming in the same API. There is no in-memory caching or iterative algorithm support like Spark's, but since ML is not our main use case for Scio, this has not been a problem.

Operational modes

  • With Spark, Scalding, Storm, etc. you generally need to operate the infrastructure and manage resources yourself, and at Spotify's scale this usually means a full team. Deploying and running code often requires knowledge of both the programming model and the infrastructure you're running it on. While there are services like Google Cloud Dataproc and similar Hadoop-as-a-service products, they still require some administrative know-how to run in a scalable and cost-effective manner. Spydra and Netflix's Genie are examples of additional tooling for such operations.
  • Scio on Google Cloud Dataflow is fully managed, which means there is no operational overhead of setting up, tearing down or maintaining a cluster. Dataflow supports auto-scaling and dynamic work rebalancing, which makes jobs more elastic in their resource utilization. A data engineer can write code and deploy it from a laptop to the cloud at scale without any operational experience.

Google Cloud Integration

  • While there are Hadoop connectors for GCS and BigQuery, plus native clients for several other products, their integration with Scalding and Spark is nowhere near as seamless as Dataflow's.
  • This is where Dataflow shines. Being a Google project, it comes with connectors for most Google Cloud big data products, including Cloud Storage, Pub/Sub, BigQuery, Bigtable, Datastore, and Spanner. One can easily build pipelines that leverage these products; see the sketch below.
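
For a taste of that integration, here is a hedged sketch of a Scio job that reads a BigQuery public sample table and writes the result to Cloud Storage; the query and bucket name are illustrative, and the legacy SQL syntax reflects the default of that era.

    import com.spotify.scio._
    import com.spotify.scio.bigquery._

    object BigQueryToGcs {
      def main(cmdlineArgs: Array[String]): Unit = {
        val (sc, _) = ContextAndArgs(cmdlineArgs)
        sc.bigQuerySelect(
            "SELECT word, word_count FROM [bigquery-public-data:samples.shakespeare]")
          .map(row => s"${row.get("word")}\t${row.get("word_count")}")
          .saveAsTextFile("gs://my-bucket/shakespeare/") // hypothetical bucket
        sc.close()
      }
    }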

By moving to Scio, we are simplifying our inventory of libraries and systems to maintain. Instead of managing Hadoop, Scalding, Spark and Storm, we can now run the majority of our workloads with a single system, with little operational overhead. Replacing other components of our big data ecosystem with managed services, e.g. Kafka with Pub/Sub, Hive with BigQuery, Cassandra with Bigtable, further reduces our development and support cost.

This concludes the first part of this blog series. In the next part we’ll take a closer look at Scio and its use cases. Stay tuned.


How To Transform A VC Fund Capital Base From Individuals To Institutional LPs


Lots of people have raised small VC funds. There are more startups, and there's more LP capital from various sources to meet that demand. After the rise of institutional seed funds in the late 2000s, we've witnessed a pure explosion of new (mostly seed) fund formation, with seemingly no end in sight. Many of those 400+ micro-VC funds (sub-$100M) are not only trying to get bigger, but also trying to convert their LP capital base from mostly individuals to mostly traditional institutional capital. It's a heavy lift to make that conversion. While I would never give advice on how to do this, as I'm still learning and each individual case is very different (each fund manager is his/her own special snowflake!), I can share a bit about how I prepared myself for it, how I launched my campaign, and how it all closed. My hope is that this post serves as a helpful guidepost for other managers and saves someone time in the future. Much of what follows benefits from hindsight... there's lots in here that I wish I had known just a year ago!

At a high level, I break this process into three distinct phases: (1) Pre-Marketing Preparations; (2) The Actual Campaign; and (3) The Closing Mechanics.

Phase 1: Pre-Marketing Preparations

Timing: I built my LP list and began putting the word out two months before hitting the market. Looking back now, I wish I had pre-marketed a full six months in advance. Institutional LPs need lots of time to meet folks, digest initial meetings, socialize things with their network, and fill their pipeline. I could even argue that six months is too short. During these meetings, it's nice to socialize your plans, target size, and strategy. LPs will offer great feedback, which can be refined and folded into your final official campaign.

Materials: I prepared a master slide deck (with the help of a designer that I paid REAL money to), which I sliced into a very short "email deck" and a slightly longer "presentation deck." In retrospect, I only needed the shorter deck (to get the meeting, and for LPs to circulate to their colleagues and peers). I sent everything via PDF, with no exception. What mattered most in the materials was explaining the manager's background & differentiation (what makes you stand out?), the strategy (where to invest the fund), the portfolio construction (how to invest the fund), and deal sourcing (where do you get your leads from?). I'll go into more detail on this in future posts.

Key Service Providers: I signed up with very well-known and vetted legal counsel in fund formation, fund administration (back office), and banking. For me, finally being able to go with the top-class players forced me to play a better game.

LPA Documents: I asked legal counsel to make my fund docs “plain vanilla,” very simple and in-line with the market. I am not a proven manager. I was shocked to hear how many people play games with the LPA when they haven’t returned real capital.

Data Room: I built a data room on Box with a paid subscription. The folders in my Box data room covered the following topics: References (co-investors, follow-on VCs, previous LPs, and portfolio CEOs); Press Mentions; Notable Blog Posts; Raw Investment Data; Service Providers; Marketing Materials; Previous LP Reports; Audit Paperwork; Official Fund Documentation. The file formats in my Box data room were only PDF, .xls, .jpg, and .png.

Lead Qualification & Tools: The pre-marketing campaign is a good time to find out who isn't a fit. One friend who is an LP told me, even before seeing the deck or anything else, that it wouldn't be a fit; he still helped me a ton in my process. Similarly, I was able to get into the pipeline of other LPs as I hit the market. I managed my contacts and flow through Pipedrive.

Skin In The Game: LPs will look for any new manager to commit a raw dollar amount commensurate with their liquid net worth. To be safe, I would expect a 2% GP commit to be table stakes, and many folks have to finance that. For folks who have the nest egg, it's wise to expect to put the appropriate skin in the game.

Phase 2: The Actual Campaign

I will just briefly share what I did; I think it is different for everyone. If anything, be conservative in how long you think it will take, and then add another 3-6 months to that conservative projection.

At a high level, I made three initial decisions that, in retrospect, made (I believe) a big difference, though it didn't feel that way until the very end. First, I picked a tight target size range and stuck with it; I never once entertained going one dollar over the high end of my target. Second, I told everyone that I'd be "in market" for six months, and that's it: no special closes, and no opening the fund for even the most royal LPs after it was closed. Third, I told everyone that whatever I had at the end of six months would be my fund size, and I would fight the war with the army I had, even if it was well below my target. I don't know why I stuck to this all the way through, but it is what I believed in, and I basically ran out of gas as the sixth month came to an end, so it was good to know I was going to stop then.

I put my rear on an airplane. A lot. I flew over 40,000 miles (all in the U.S.). I spent 28 nights away from home, two of them as red-eyes. I probably talked with and had initial calls/meetings with over 200 different institutional LPs. Naturally, most didn’t go to a second live meeting, but more than enough did. I did not wait until an LP was going to be in the Bay Area to see them — they’re usually only here for a night or two, at the most, and they have to see their current managers. I went to see them on their turf, every single time. I took notes in every meeting. I have a pretty good sense now of what LPs are in specific VC funds. Note-taking is important to remember follow-ups, to understand their network and relationship, and to send them things that interest them in their business. As they get more interested in you, they will come meet you in neutral locations or on your turf, and those signals are important markers to pay attention to.

The entire decision often rests on the quality of your references. It is very hard to quickly gin up references; you either have stellar references or you don't. Even your biggest supporters will share your weaknesses and areas for improvement with prospective LPs. Even though I've now raised four different funds, I couldn't believe how much referencing went on this time. There is nowhere to hide. However you've treated others and behaved, it will be surfaced, for better or worse. LPs are looking at the whole history.

It sounds corny, but I befriended a good number of LPs who passed on me very quickly but were so nice and helpful (as people) that I sent them tips on new interesting funds and managers, some of which even led to them making an LP investment. Once you accept that most LPs are decent people and that they're going to say "no," it becomes easier to simply have a conversation with them. Some of them are really far away from our world of startups and VC funds; yet some of them are incredibly deep into it and know way more than even popular fund managers do. Sure, there will be some that you meet that you hope to never see again, but they're a very small minority.

I held two official closes. The first close was half-way through the campaign, and most of my insiders re-upped and some super-sized. Looking back, I thought I would have more of the fund done by then — but, no, not even close. In the second-half of the campaign, I turned on the jets and just focused on the institutions. Seven business days before I was set to give it up and go with what I had, I finally got a string of institutional LP commits, like dominoes falling. It was really random. If you follow NBA history, it was like the Pacers-Knicks game where Reggie Miller hit all those three-pointers at the end. It felt like that. I got lucky, but it was really close.

Phase 3: The Closing Mechanics

I only can share some high-level learnings here:

1/ Good legal counsel matters. It is an art to line up these different LPs at the same time and on the same terms.
2/ No one but you has the deep urgency to close. You have to be fierce in pursuit of the close. Some people will get annoyed with it, but you have to close it out.
3/ Expect a major curveball. I can’t say what it will be, but expect to be taken by surprise and roll with it.

[Big disclaimer: I cannot emphasize enough how many people you will need to help you and advocate for you to get over the finish line. I have been in awe of what others have done for me: all the reference calls, extra nudges over text, and pounding the table even in cases when it didn't work out. I will go through this process and thank folks properly in the coming weeks.]

I hope this helps folks out there. I hope you’ll notice this isn’t as detailed as it could be. I think the process can be pretty simple and straightforward. Set up the materials properly, be human in the meetings, follow-up with precision, and drive people to a decision and close. Lots of folks have been asking me recently “How did you do it?” so I thought it would be only fair for me to share it more broadly and expand on key areas over the next few weeks. Good luck out there!


An in-depth look at moving from iPhoto to Photos

As noted in prior posts, I’ve recently moved to Photos from iPhoto. So far, it’s been a mixed experience. There are some elements of Photos I like, but as of today, those things are outweighed by the things I don’t like. I’ve vented on a number of the things I dislike on Twitter, but wanted […]

Windows Phone was a glorious failure


This past weekend, Microsoft made official what was already known for years: the Windows Phone mobile operating system is dead. There’ll be no further development, no miraculous Windows 10 Mobile revivals, and no further attempts to compete with the overwhelming duopoly of Apple’s iOS and Google’s Android. The new Microsoft, led by Satya Nadella, prefers collaboration over competition — or at least that’s the choice the company tells itself it has made in abandoning its thwarted mobile OS venture.

But the overall failure of Windows Phone masks a series of smaller successes and advances, which Microsoft and its hardware partners have never received enough credit for. At its outset in 2010, Windows Phone was the boldest and most original...



Power BI NFL Football Stats Comparisons and Analysis Report is now available!


For the past few years I've combined my love of professional football and analytics by releasing a series of Power BI reports featuring player statistics, and this year is no different. I'm finally able to release my NFL Football Stats, Comparisons, and Analysis reports featuring the stats for players at the quarterback, running back, wide receiver, tight end, and kicker positions. Unlike previous years, this year's reports are based on data from NFL.com. My goal for producing these reports is, as nerdy as this is, to give me a leg up in my fantasy football drafts. If you've ever played fantasy football, you know the key to winning is having the deepest roster, and I hope that these reports will allow me to identify middle- and late-round talent using the collected data in a readable and navigable format.

View the Power BI NFL Stats, Comparisons, and Analysis Reports here

I think this is probably the best version of the NFL stats report that I've released yet, and there are a few reasons why. First, I included comparisons between each individual player and league averages, which I hope will provide valuable insight into how each player stacks up against the rest of the league.

Second, I included colored KPIs on the Team Dashboard to indicate how a player compares to the league. For instance, if a running back has a green KPI for Yards Rushing, that means they rushed for at least 10% more yards than the league average. Yellow indicates the player rushed for between 90% and 110% of the league average, and red means they rushed for less than 90% of the league average.
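
The banding is simple threshold arithmetic on the ratio of a player's stat to the league average. As an illustration only (the reports implement this inside Power BI; this sketch merely restates the bands in code):

    // Sketch of the KPI bands described above; the 90%/110% cutoffs
    // come from the post, everything else is illustrative.
    def kpiColor(playerStat: Double, leagueAvg: Double): String = {
      val ratio = playerStat / leagueAvg
      if (ratio > 1.10) "green"        // more than 110% of league average
      else if (ratio >= 0.90) "yellow" // within 90%-110% of league average
      else "red"                       // below 90% of league average
    }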

Also, I included filters on the analysis reports for position groups, allowing you to filter for players that obtained a certain number of yards, receptions, touchdowns, etc. So if you were interested in only looking at wide receivers that had at least 1,000 receiving yards and caught at least 10 touchdowns, you could use the slider bars to accomplish that type of analysis.

The Team Dashboard

After the title page, the first page is the Team Dashboard, which features an overall team analysis. You can use the filter box in the top right to filter down to one team at a time, and the year filter to look at each team's stats for the previous three seasons. The quarterback, running back, and receiver position groups are displayed with KPIs below.

[Screenshot: football team dashboard stats]

Quarterback Analysis and Comparisons

The next two pages feature the QB Analysis and QB Comparisons reports. Use the QB Analysis page to identify top-performing quarterbacks, but also to spot quarterbacks who might be considered above average but not top tier.

[Screenshot: NFL quarterback stats analysis]

Once you’ve narrowed down your choices for a QB, use the QB Comparisons page to compare the stats of two players.

[Screenshot: NFL quarterback comparison]

Runningback Analysis and Comparisons

The RB Analysis and RB Comparisons reports are very similar to their QB counterparts, with the exception, of course, that the information displayed is limited to the running back position group. One interesting thing easily seen in the scatter plot below is that there is no big difference between Howard, Murray, and McCoy when comparing rushing yards and receiving yards.

[Screenshots: NFL running back stats analysis and comparison]

Receiver Analysis and Comparisons

Then you have the Receiver Analysis and Comparisons reports displaying stats and information for receivers, including wide receivers and tight ends.

[Screenshots: NFL receiver and tight end stats analysis and comparison]

Kicker Analysis and Comparisons

And of course, you’ve got the Kickers, the most important position in ANY fantasy football draft.

[Screenshots: NFL kicker analysis and comparison]

I hope you find these Power BI reports useful. If you have any questions or feedback, feel free to leave a comment down below. And if you happen to uncover any bugs, please let me know!

Resources

View the Power BI NFL Football Stats Comparisons and Analysis Report here

Download the Power BI Desktop file here


The Moral Shambles That is Our President


Denouncing Nazis and the KKK and violent white supremacists by those names should not be a difficult thing for a president to do, particularly when those groups are the instigators and proximate cause of violence in an American city, and one of their number has rammed his car through a group of counter-protestors, killing one and injuring dozens more. This is a moral gimme — something so obvious and clear and easy that a president should almost not get credit for it, any more than he should get credit for putting on pants before he goes to have a press conference.

And yet this president — our president, the current President of the United States — couldn’t manage it. The best he could manage was to fumble through a condemnation of “many sides,” as if those protesting the Nazis and the KKK and the violent white supremacists had equal culpability for the events of the day. He couldn’t manage this moral gimme, and when his apparatchiks were given an opportunity to take a mulligan on it, they doubled down instead.

This was a spectacular failure of leadership, the moral equivalent not only of missing a putt with the ball on the lip of the cup, but of taking out your favorite driver and whacking that ball far into the woods. Our president literally could not bring himself to say that Nazis and the KKK and violent white supremacists are bad. He sorely wants you to believe he implied it. But he couldn’t say it.

To be clear, when it was announced the president would address the press about Charlottesville, I wasn’t expecting much from him. He’s not a man to expect much from, in terms of presidential gravitas. But the moral bar here was so low it was on the ground, and he tripped over it anyway.

And because he did, no one — and certainly not the Nazis and the KKK and the violent white supremacists, who were hoping for the wink and nod that they got here — believes the president actually thinks there’s a problem with the Nazis and the KKK and the violent white supremacists. If he finally does get around to admitting that they are bad, he’ll do it in the same truculent, forced way that he used when he was forced to admit that yeah, sure, maybe Obama was born in the United States after all. An admission that makes it clear it’s being compelled rather than volunteered. The Nazis and the KKK and the violent white supremacists will understand what that means, too.

Our president, simply put, is a profound moral shambles. He’s a racist and sexist himself, he’s populated his administration with Nazi sympathizers and white supremacists, and is pursuing policies, from immigration to voting rights, that make white nationalists really very happy. We shouldn’t be surprised someone like him can’t pass from his lips the names of the hate groups that visited Charlottesville, but we can still be disappointed, and very very angry about it. I hate that my baseline expectation for the moral behavior of the President of the United States is “failure,” but here we are, and yesterday, as with the previous 200-some days of this administration, gives no indication that this baseline expectation is unfounded.

And more than that. White supremacy is evil. Nazism is evil. The racism and hate we saw in Charlottesville yesterday is evil. The domestic terrorism that happened there yesterday — a man, motivated by racial hate, mowing down innocents — is evil. And none of what happened yesterday just happened. It happened because the Nazis and the KKK and the violent white supremacists felt emboldened. They felt emboldened because they believe that one of their own is in the White House, or at least, feel like he’s surrounded himself with enough of their own (or enough fellow travelers) that it’s all the same from a practical point of view. They believe their time has come round at last, and they believe no one is going to stop them, because one of their own has his hand on the levers of power.

When evil believes you are one of their own, and you have the opportunity to denounce it, and call it out by name, what should you do? And what should we believe of you, if you do not? What should we believe of you, if you do not, and you are President of the United States?

My president won’t call out evil by its given name. He can. But he won’t. I know what I think that means for him. I also know what I think it means for the United States. And I know what it means for me. My president won’t call out evil for what it is, but I can do better. And so can you. And so can everybody else. Our country can be better than it is now, and better than the president it has.


3 public comments

LeMadChef (Denver, CO), 102 days ago:
The golf analogy was icing on the cake.

jerkso (Bangkok, Thailand), 102 days ago:
You are making hoops for the president to jump through; he denounced all sides for the violence. He has denounced the violence and you know it. Your response is weak and without substance. Why don't you talk about the cause of this and their numbers growing? Hint: it is not Trump.

katster, 102 days ago:
“Trump comments were good. He didn’t attack us. He just said the nation should come together. Nothing specific against us….. There was virtually no counter-signaling of us at all. He said he loves us all…. No condemnation at all. When asked to condemn, he just walked out of the room. Really, really good. God bless him.” ~The Daily Stormer

boredomfestival, 102 days ago:
There are not multiple sides to this.

jerkso, 101 days ago:
There aren't multiple sides? Care to explain? That seems a refutation of reality as much as common sense. It seems both of you are perhaps living in a bubble. Quit it with the might-is-right reasoning in such attitudes and ignorance. Recognize the poison there on both sides. Ignoring this collectivism of any sort is going to end poorly for everyone.

torrentprime, 47 days ago:
“Why don’t you talk about THOSE PEOPLE and how their numbers make me think THOSE PEOPLE ARE BREEDING LIKE RABBITS. Btw, is it Make America Great Again or Make America White Again? Made ya think, didn’t I?” 🤢😷🤢😷🤢

skorgu, 103 days ago:
GOP. Delenda. Est.