Gadgets. Tech. Data Warehouse. Business Intelligence. NFL. College Football.

I Only Have 7 Trips Left. On Managing Work / Life Balance, Love & Family


Like many people these days, I spent much of my 20s and early 30s thinking about work & fun and not too much about “the future.” Like the characters in one of my favorite novels, “The Unbearable Lightness of Being,” life seemed very light.

My first son was born the day before my 35th birthday, so the decade that followed was very heavy and consequential. Life mattered for more than my pure enjoyment — I had to be responsible for the futures of these two lovable little boys. I still worked hard, but the balance of my time and energy went into family. My relationships narrowed to a smaller set of people who really mattered to me, my frivolous hobbies dwindled to only the most valuable, and time became my scarcest commodity. If you’ve lived a decade with young children you know that it’s both unbelievably rewarding and also physically and emotionally exhausting.

Many of my friends and colleagues also find themselves in the “sandwich years,” when responsibilities for aging parents increase at the same time as those for your kids and mortality becomes a reality. During this decade we lost a close family member we loved to cancer and realized that life is too short, and that if we didn’t take advantage of the blessings we had to spend time together, we would be shortchanging ourselves and our children.

So for the past 7 years we have ramped up sibling, cousin, grandparent and extended-family time as much as we could, and we have loved every minute of it. I started thinking about “how many Thanksgivings, July 4ths or holidays we really had all together,” and when you do the math it is daunting.

I already had a sense of the heaviness (in a good way) of my forties when I came across this excellent post on one of my favorite blogs, WaitButWhy, entitled “The Tail End,” in which the author uses pictographs to make the brevity of life and family time concrete. The author was 34 when he wrote it and estimated that if he’s REALLY lucky he has at best 60 Super Bowls left.

If I assume that I’m 10 years less lucky (and live to 84), and given that I happen to be 49 years old now, that means the Eagles have only about 35 more tries to win their first Super Bowl. Now you can see the urgency of Carson Wentz living up to his full expectations! It’s on you, Carson. I’ll do my best to make it to 90 but I’m still counting on you.

But seriously, I recently sent this Tail End article to my brothers and sister to remind them why it was so important that we all get together for Thanksgiving this year. In kid years I have just 4 more Thanksgivings until my eldest son goes to college, so I don’t have many to spare. And while I fully expect my children to come back for family vacations post high school, I’m also a realist about life and an advocate of independence.

It took losing my wife’s brother to realize how little time we all had together and how important it was to get together every family vacation we could, but I also look at this as the gift that Tom gave us all in our lifetimes. And I think about Tom at every family gathering, whether it’s Tania’s family or mine. I am now the age Tom was when he passed away (49) and I don’t take for granted the time I have on this Earth.

So last year I talked with my wife about how few “nuclear family” trips we had been able to take given all of the extended family trips that were so important to us and we committed to doing 2 nuclear family trips per year until Jacob is in college (and of course we plan to continue this for years after and we have Andy for 8 more years!).

I just returned this weekend from our 3rd of 10 trips (70% to go!) and this time I decided not to bring my computer. I put on an out-of-office notice (see below) and received some of the nicest emails and text messages, including from my good friend Michael Broukhim, who assures me that he and his brother Danny still vacation with their parents, and Mike & Danny are both in their 30s! (I vacationed with my parents, too, until I got engaged at 33).

I was reading the saddest story this morning about a Silicon Valley lawyer who struggled with work/life balance, stress and the modern pressures of keeping up with the Joneses and competing at the top levels of tech startup life. It’s a really sad but important story that I hope you’ll read. It is written by the ex-wife of a corporate lawyer in Silicon Valley who struggled with drug addiction while trying to maintain his status atop his field, and with the stresses that go with that. She titled it “The Lawyer, the Addict.”

I’m not perfect and, like many of you, still struggle with work / life balance. I was blessed in life not to have chemical dependency issues or depression, but I’ve seen them all around me and seen them take the lives of some people I was close to. It’s why I try to write about and be available to people who suffer from depression. It’s why I try to be open about how stressful being a founder really was, how stressful being a VC is even for an obviously “privileged class,” and how physically unhealthy being a founder was for me. As you will see if you read “The Lawyer, the Addict,” even highly successful people can succumb to the pressures of peer expectations and relative performance: destruction that is entirely self-made but real nonetheless.

I love my wife and I love my children. I think some of our fondest memories will be the goofy time we spent during our travels as opposed to the planned itineraries. We’ll remember all of the games of Hearts. We’ll remember when Andy fell down the hill into the bushes (but was ok). We’ll remember throwing the football on the beach with Troy Aikman (the nicest pro football player you’ll ever meet who even with no cameras around and even once he found out we were Eagles fans was still so gracious to my boys). We’ll remember Daddy accidentally shoving an entire Serrano chili pepper into his mouth because it was dark outside and he thought it was a carrot. And we’ll remember how much time Mom spent meticulously planning with love so that our entire family could enjoy every moment.

If you’re caught on the hamster wheel, recognizing it and trying to take some action is the first step. Having just gotten back from my first proper 2-week vacation (as opposed to extended family gathering) since 2009, I can tell you it was truly fulfilling. I’m now ready to come back to work feeling really refreshed. As a side note, if you’ve never been to Alaska, it is truly one of the most beautiful, spectacular and awe-inspiring places I’ve visited. If you want to catch just a few moments of our trip you can find them on Instagram.

***

Below was my out-of-office reply in hopes of inspiring at least some of you to seek out your own work / life balanced vacations in the years ahead…

4 Years

Thank you for writing to me and forgive me for not responding right away.

About a year ago I was sitting down with my wife and talking about life and realizing that my two boys were about to pass me in height and in their minds they would soon be passing me, too, in worldly knowledge. As if! My eldest son was in 8th grade at the time. Like many of you, my wife & I worked our butts off in our 20s and 30s. When we had kids we did everything we could to balance daily existence, jobs, being great parents and, well, sleep. Every chance for a vacation was an opportunity to see grandparents, aunts & uncles, cousins and childhood friends. We love our families and cherish these visits but it’s different than nuclear-family downtime.

Now we face high school. And we realize we only have 4 years left as a nuclear family until we send Jacob to college. I’m even a bit verklempt as I type this. So Tania & I promised ourselves 2 great trips a year with just our nuclear family. 8 more nuclear-family vacations to create memories that we hope last beyond our time on this planet. We love our boys and our family and at this pivotal moment we also want to model good behavior where we don’t spend the entirety of our trip doing emails or checking Facebook.

So we’re off to Alaska. We won’t be 100% unplugged, but we plan to be as much as possible, so we likely won’t see your email. When I get back I don’t plan to spend 50 hours processing old emails. So here are my asks:

1. If it’s urgent please email xxxxxxxxxxx who will help. He really doesn’t mind — even if it’s just directing you to somebody else at Upfront who can help. If it’s future scheduling of a meeting for me please email xxxxxxxxxxx. If it really needs my attention please text me (I don’t mind) but know that we may not have perfect text messaging coverage. Jori has my itinerary and can find me. No, we’re not going on a cruise. Why does everybody always ask that when you tell them you’re going to Alaska?!?

2. If it can wait please email me again on July 15th. This is the single longest true vacation I have taken since 2009 and I can’t tell you how excited I am to recharge the batteries and crush my kids at Hearts.

3. If you find yourself today or in the future at the same life stage as I am, find a way to truly check out. You don’t get these days back. So I’m going to make the most of my 8 trips and 4 years. I hope if you’re able to you will one day, too.


I Only Have 7 Trips Left. On Managing Work / Life Balance, Love & Family was originally published in Both Sides of the Table on Medium, where people are continuing the conversation by highlighting and responding to this story.

1 public comment
futurile (3 days ago): I find this more depressing than uplifting. Take holidays, be unplugged from work. It's not that important, and you're not that important to it.

Data Virtualization: Unlocking Data for AI and Machine Learning


This post is authored by Robert Alexander, Senior Software Engineer at Microsoft.

For reliability, accuracy and performance, both AI and machine learning rely heavily on large data sets: the larger the pool of data, the better you can train the models. That’s why it’s critical for big data platforms to work efficiently with different data streams and systems, regardless of the structure of the data (or lack thereof), its velocity or its volume.

However, that’s easier said than done.

Today every big data platform faces these systemic challenges:

  1. Compute / Storage Overlap: Traditionally, compute and storage were never delineated; as data volumes grew, you had to invest in compute as well as storage.
  2. Non-Uniform Access of Data: Over the years, heavy dependence on business operations and applications has led companies to acquire, ingest and store data in different physical systems: file systems, databases and data warehouses (e.g. SQL Server or Oracle), and big data systems (e.g. Hadoop). The result is a set of disparate systems, each with its own method of accessing data.
  3. Hardware-Bound Compute: Your data may sit in a well-designed storage schema (e.g. SQL Server), but you are hardware-constrained when executing queries, which can take hours to complete.
  4. Remote Data: Data is dispersed across geo-locations and/or uses different underlying technology stacks (e.g. SQL Server, Oracle, Hadoop), and may be stored on premises or in the cloud. This means raw data has to be physically moved to be processed, which increases network I/O costs.

With the advent of AI and ML, overcoming these challenges has become a business imperative. Data virtualization is rooted in this premise.

What’s Data Virtualization Anyway?

Data virtualization offers techniques to abstract the way we handle and access data. It lets you manage and work with data across heterogeneous streams and systems, regardless of their physical location or format. Data virtualization can be defined as a set of tools, techniques and methods that let you access and interact with data without worrying about where it physically resides or what compute is applied to it.

For instance, say you have tons of data spread across disparate systems and want to query it all in a unified manner, but without moving the data around. That’s when you would want to leverage data virtualization techniques.

In this post, we’ll go over a few data virtualization techniques and illustrate how they make the handling of big data both efficient and easy.

Data Virtualization Architectures

Data virtualization can be illustrated using the lambda architecture implementation of the advanced analytics stack, on the Azure cloud:


Figure 1: Lambda architecture implementation using Azure platform services 

In big data processing platforms, tons of data are ingested per second, both at rest and in motion. This big data is collected in canonical data stores (e.g. Azure storage blobs) and subsequently cleaned, partitioned, aggregated and prepared for downstream processing such as machine learning, visualization and dashboard report generation.

This downstream processing is backed by SQL Server, and – based on the number of users – it can get overloaded when many queries are executed in parallel by competing services. To address such overload scenarios, data virtualization provides Query Scale-Out where a portion of the compute is offloaded to more powerful systems such as Hadoop clusters.

Another scenario, shown in Figure 1, involves ETL processes running in HDInsight (Hadoop) clusters. The ETL transform may need access to referential data stored in SQL Server. For this case, data virtualization provides Hybrid Execution, which allows you to query referential data from remote stores such as SQL Server.

Query Scale-out

What Is It?

Say you have a multi-tenant SQL Server running in a hardware-constrained environment. You want to offload some of the compute to speed up queries. You also want to access big data that won’t fit in SQL Server. These are the situations where Query Scale-Out can be used.

Query Scale-Out uses PolyBase technology, which was introduced in SQL Server 2016. PolyBase allows you to execute a portion of a query remotely on a faster, higher-capacity big data system, such as a Hadoop cluster.
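To make this concrete, here is a minimal sketch (not taken from the post or its solution demo) of what a PolyBase external table over HDFS might look like, driven from Python via pyodbc. The connection string, Hadoop endpoints and table names are hypothetical placeholders.

```python
import pyodbc

# Hypothetical connection details; adjust driver, server and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mysqlserver.example.com;DATABASE=Telemetry;UID=demo;PWD=***"
)
cur = conn.cursor()

# External data source pointing at the Hadoop cluster. RESOURCE_MANAGER_LOCATION
# is what allows PolyBase to push computation down to the cluster.
cur.execute("""
    CREATE EXTERNAL DATA SOURCE HadoopCluster
    WITH (TYPE = HADOOP,
          LOCATION = 'hdfs://10.0.0.4:8020',
          RESOURCE_MANAGER_LOCATION = '10.0.0.4:8050');
""")

cur.execute("""
    CREATE EXTERNAL FILE FORMAT CsvFormat
    WITH (FORMAT_TYPE = DELIMITEDTEXT,
          FORMAT_OPTIONS (FIELD_TERMINATOR = ','));
""")

# External table over files in HDFS; no data is copied into SQL Server.
cur.execute("""
    CREATE EXTERNAL TABLE dbo.SensorReadings_HDFS (
        DeviceId INT, ReadingTime DATETIME2, ReadingValue FLOAT)
    WITH (LOCATION = '/data/sensor-readings/',
          DATA_SOURCE = HadoopCluster,
          FILE_FORMAT = CsvFormat);
""")
conn.commit()

# The aggregation can now be pushed to the Hadoop cluster (query scale-out).
cur.execute("""
    SELECT DeviceId, AVG(ReadingValue) AS AvgValue
    FROM dbo.SensorReadings_HDFS
    GROUP BY DeviceId
    OPTION (FORCE EXTERNALPUSHDOWN);
""")
for row in cur.fetchall():
    print(row.DeviceId, row.AvgValue)
```

The FORCE EXTERNALPUSHDOWN hint asks SQL Server to run the aggregation work on the Hadoop side rather than pulling the raw rows over the network.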

The architecture for Query Scale-out is illustrated below.


Figure 2: System-level illustration of Query Scale-Out

What Problems Does It Address?

  • Compute / Storage Overlap: You can delineate compute from storage by running queries on external clusters, and you can extend SQL Server storage by enabling access to data in HDFS.
  • Hardware-Bound Compute: You can run computations in parallel, leveraging faster systems.
  • Remote Data: You can keep the data where it is and return only the processed result set.

Further explore and deploy Query Scale-out using the one-click automated demo at the solution gallery.

Hybrid Execution

What Is It?

Say you have ETL processes which run on your unstructured data and then store the data in blobs. You need to join this blob data with referential data stored in a relational database. How would you uniformly access data across these distinct data sources? These are the situations in which Hybrid Execution would be used.

Hybrid Execution allows you to “push” queries to a remote system, such as to SQL Server, and access the referential data.
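As an illustration only (not necessarily how the solution demo implements Hybrid Execution), the sketch below shows one common way to do this from an HDInsight Spark job: read the ETL output from blob storage, pull the referential table from SQL Server over JDBC, and join the two. The storage account, server and table names are hypothetical, and the SQL Server JDBC driver is assumed to be on the Spark classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hybrid-execution-sketch").getOrCreate()

# Semi-structured ETL output previously landed in Azure blob storage.
clicks = spark.read.json(
    "wasbs://etl-output@mystorageacct.blob.core.windows.net/clicks/")

# Referential data queried remotely from SQL Server.
products = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://mysqlserver.example.com:1433;databaseName=Catalog")
    .option("dbtable", "dbo.Products")   # or a pushed-down "(SELECT ...) AS p"
    .option("user", "demo")
    .option("password", "***")
    .load()
)

# Join blob data with the referential data for downstream processing.
enriched = clicks.join(products, on="ProductId", how="left")
enriched.groupBy("Category").count().show()
```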

The architecture for Hybrid Execution is illustrated below.


Figure 3: System-level illustration of Hybrid Execution

What Problems Does It Address?

  • Non-Uniform Access of Data: You are no longer constrained by where and how data is stored.
  • Remote Data: You can access reference data from external systems, for use in downstream apps.

Further explore and deploy Hybrid Execution using the one-click automated demo at the solution gallery.

Performance Benchmarks: What Optimization Gains Can You Expect?

You may be asking yourself whether it’s worthwhile using these techniques.

Query Scale-Out makes sense when data already exists on Hadoop. Referring to Figure 1, you may not want to push all the data to HDInsight just to see the performance gain.

However, one can imagine a use case where lots of ETL processing happens in HDInsight clusters and the structured results are published to SQL Server for downstream consumption (for instance, by reporting tools). To give you an idea of the performance gains you can expect by using these techniques, here are some benchmark numbers based on the datasets used in our solution demo. These benchmarks were produced by varying the size of datasets and the size of HDInsight clusters.


Figure 4: Query execution time with and without scaling

The x axis shows the number of rows in the table used for benchmarking. The y axis shows the number of seconds the query took to execute. Note the linear increase in execution time with SQL Server only (blue line) versus when HDInsight is used alongside SQL Server to scale out the query execution (orange and grey lines). Another interesting observation is the flattening out of execution time with a four-worker-node versus a two-worker-node HDInsight cluster (grey vs. orange line).

Of course, these results are specific to the simplified dataset and schema we provide with the solution demo. With much larger real-world datasets in SQL Server, which typically runs multiple queries competing for resources, more dramatic performance gains can be expected.

The next question to ask is: when does it become cost-effective to switch over to Query Scale-Out? The chart below incorporates the pricing of the resources used in this experiment. You can see a detailed pricing calculation here.


Figure 5: Query execution time with and without scaling (with pricing)

You can see that with 40 million rows it’s cheapest to execute this query on SQL Server only. But by the time you are up to 160 million rows, Scale-Out becomes cheaper. This shows that as the number of rows increases, it could become cheaper to run with scaling out. You can use these types of benchmarks and calculations to help you deploy your resources with an optimal balance of performance and cost.

Try It Yourself, with One-Click Deployment

To try out the data virtualization techniques discussed in this blog post, deploy the solution demo in your Azure subscription today using the automated one-click deployment solution.

To gain a deeper understanding on how to implement data virtualization techniques, be sure to read our technical guide.

Robert

TheRomit (15 days ago): Good explainer of a very important technology

Refresh Types

The hardest refresh requires both a Mac keyboard and a Windows keyboard as a security measure, like how missile launch systems require two keys to be turned at once.
3 public comments
tdarby (29 days ago): lo
Covarr (29 days ago): Hard Reset - PC reset button - causes SEGA to fight SOPA.
alt_text_bot (29 days ago): The hardest refresh requires both a Mac keyboard and a Windows keyboard as a security measure, like how missile launch systems require two keys to be turned at once.

Amazon’s New Customer


Back in 2006, when the iPhone was a mere rumor, Palm CEO Ed Colligan was asked if he was worried:

“We’ve learned and struggled for a few years here figuring out how to make a decent phone,” he said. “PC guys are not going to just figure this out. They’re not going to just walk in.” What if Steve Jobs’ company did bring an iPod phone to market? Well, it would probably use WiFi technology and could be distributed through the Apple stores and not the carriers like Verizon or Cingular, Colligan theorized.

I was reminded of this quote after Amazon announced an agreement to buy Whole Foods for $13.7 billion; after all, it was only two years ago that Whole Foods founder and CEO John Mackey predicted that groceries would be Amazon’s Waterloo. And while Colligan’s prediction was far worse — Apple simply left Palm in the dust, unable to compete — it is Mackey who has to call Amazon founder and CEO Jeff Bezos, the Napoleon of this little morality play, boss.

The similarities go deeper, though: both Colligan and Mackey made the same analytical mistakes: they misunderstood their opponents’ goals, strategies, and tactics. This is particularly easy to grok in the case of Colligan and the iPhone: Apple’s goal was not to build a phone but to build an even more personal computer; their strategy was not to add functionality to a phone but to reduce the phone to an app; and their tactics were not to duplicate the carriers but to leverage their connection with customers to gain concessions from them.

Mackey’s misunderstanding was more subtle, and more profound: while the iPhone may be the most successful product of all time, Amazon and Jeff Bezos have their sights on being the most dominant company of all time. Start there, and this purchase makes all kinds of sense.

Amazon’s Goal

If you don’t understand a company’s goals, how can you know what the strategies and tactics will be? Unfortunately, many companies, particularly the most ambitious, aren’t as explicit as you might like. In the case of Amazon, the company stated in its 1997 S-1:

Amazon.com’s objective is to be the leading online retailer of information-based products and services, with an initial focus on books.

Even if you picked up on the fact that books were only step one (which most people at the time did not), it was hard to imagine just how all-encompassing Amazon.com would soon become; within a few years Amazon’s updated mission statement reflected the reality of the company’s e-commerce ambitions:

Our vision is to be earth’s most customer centric company; to build a place where people can come to find and discover anything they might want to buy online.

“Anything they might want to buy online” was pretty broad; the advent of Amazon Web Services a few years later showed it wasn’t broad enough, and a few years ago Amazon reduced its stated goal to just that first clause: We seek to be Earth’s most customer-centric company. There are no more bounds, and I don’t think that is an accident. As I put it on a podcast a few months ago, Amazon’s goal is to take a cut of all economic activity.

This, then, is the mistake Mackey made: while he rightly understood that Amazon was going to do everything possible to win in groceries — the category accounts for about 20% of consumer spending — he presumed that the effort would be limited to e-commerce. E-commerce, though, is a tactic; indeed, when it comes to Amazon’s current approach, it doesn’t even rise to strategy.

Amazon’s Strategy

As you might expect, given a goal as audacious as “taking a cut of all economic activity”, Amazon has several different strategies. The key to the enterprise is AWS: if it is better to build an Internet-enabled business on the public cloud, and if all businesses will soon be Internet-enabled businesses, it follows that AWS is well-placed to take a cut of all business activity.

On the consumer side the key is Prime. While Amazon has long pursued a dominant strategy in retail — superior cost and superior selection — it is difficult to build sustainable differentiation on these factors alone. After all, another retailer is only a click away.

This, though, is the brilliance of Prime: thanks to its reliability and convenience (two-day shipping, sometimes faster!), plus human fallibility when it comes to considering sunk costs (you’ve already paid $99!), why even bother looking anywhere else? With Prime Amazon has created a powerful moat around consumer goods that does not depend on simply having the lowest price, because Prime customers don’t even bother to check.

This, though, is why groceries is a strategic hole: not only is it the largest retail category, it is the most persistent opportunity for other retailers to gain access to Prime members and remind them there are alternatives. That is why Amazon has been so determined in the space: AmazonFresh launched a decade ago, and unlike other Amazon experiments, has continued to receive funding along with other rumored initiatives like convenience store and grocery pick-ups. Amazon simply hasn’t been able to figure out the right tactics.

Amazon’s Tactics

To understand why groceries are such a challenge look at how they differ from books, Amazon’s first product:

  • There are far more books than can ever fit in a physical store, which means an e-commerce site can win on selection; in comparison, there simply aren’t that many grocery items (a typical grocery store will have between 30,000 and 50,000 SKUs)
  • When you order a book, you know exactly what you are getting: a book from Amazon is the same as a book from a local bookstore; groceries, on the other hand, can vary in quality not just store-to-store but, particularly in the case of perishable goods, item-to-item
  • Books can be stored in a centralized warehouse indefinitely; perishable groceries can only be stored for a limited amount of time and degrade in quality during transit

As Mackey surely understood, this meant that AmazonFresh was at a cost disadvantage to physical grocers as well: in order to be competitive AmazonFresh needed to stock a lot of perishable items; however, as long as AmazonFresh was not operating at meaningful scale a huge number of those perishable items would spoil. And, given the inherent local nature of groceries, scale needed to be achieved not on a national basis but a city one.

Groceries are a fundamentally different problem that need a fundamentally different solution; what is so brilliant about this deal, though, is that it solves the problem in a fundamentally Amazonian way.

The First-And-Best Customer

Last year in The Amazon Tax I explained how the different parts of the company — like AWS and Prime — were on a conceptual level more similar than you might think, and that said concepts were rooted in the very structure of Amazon itself. The best example is AWS, which offered server functionality as “primitives”, giving maximum flexibility for developers to build on top of:[1]

The “primitives” model modularized Amazon’s infrastructure, effectively transforming raw data center components into storage, computing, databases, etc. which could be used on an ad-hoc basis not only by Amazon’s internal teams but also outside developers:


This AWS layer in the middle has several key characteristics:

  • AWS has massive fixed costs but benefits tremendously from economies of scale
  • The cost to build AWS was justified because the first and best customer is Amazon’s e-commerce business
  • AWS’s focus on “primitives” meant it could be sold as-is to developers beyond Amazon, increasing the returns to scale and, by extension, deepening AWS’ moat

This last point was a win-win: developers would have access to enterprise-level computing resources with zero up-front investment; Amazon, meanwhile, would get that much more scale for a set of products for which they would be the first and best customer.

As I detailed in that article, this exact same framework applies to Amazon.com:

Prime is a super experience with superior prices and superior selection, and it too feeds into a scale play. The result is a business that looks like this:


That is, of course, the same structure as AWS — and it shares similar characteristics:

  • E-commerce distribution has massive fixed costs but benefits tremendously from economies of scale
  • The cost to build-out Amazon’s fulfillment centers was justified because the first and best customer is Amazon’s e-commerce business

That last bullet point may seem odd, but in fact 40% of Amazon’s sales (on a unit basis) are sold by 3rd-party merchants; most of these merchants leverage Fulfilled-by-Amazon, which means their goods are stored in Amazon’s fulfillment centers and covered by Prime. This increases the return to scale for Amazon’s fulfillment centers, increases the value of Prime, and deepens Amazon’s moat.

As I noted in that piece, you can see the outline of similar efforts in logistics: Amazon is building out a delivery network with itself as the first-and-best customer; in the long run it seems obvious said logistics services will be exposed as a platform.

This, though, is what was missing from Amazon’s grocery efforts: there was no first-and-best customer. Absent that, and given all the limitations of groceries, AmazonFresh was doomed to be eternally sub-scale.

Whole Foods: Customer, not Retailer

This is the key to understanding the purchase of Whole Foods: to the outside it may seem that Amazon is buying a retailer. The truth, though, is that Amazon is buying a customer — the first-and-best customer that will instantly bring its grocery efforts to scale.

Today, all of the logistics that go into a Whole Foods store are for the purpose of stocking physical shelves: the entire operation is integrated. What I expect Amazon to do over the next few years is transform the Whole Foods supply chain into a service architecture based on primitives: meat, fruit, vegetables, baked goods, non-perishables (Whole Foods’ outsized reliance on store brands is something that I’m sure was very attractive to Amazon). What will make this massive investment worth it, though, is that there will be a guaranteed customer: Whole Foods Markets.


In the long run, physical grocery stores will be only one of the Amazon Grocery Services’ customers: obviously a home delivery service will be another, and it will be far more efficient than a company like Instacart trying to layer on top of Whole Foods’ current integrated model.

I suspect Amazon’s ambitions stretch further, though: Amazon Grocery Services will be well-placed to start supplying restaurants too, gaining Amazon access to another big cut of economic activity. It is the AWS model, which is to say it is the Amazon model, but like AWS, the key to profitability is having a first-and-best customer able to utilize the massive investment necessary to build the service out in the first place.


I said at the beginning that Mackey misunderstood Amazon’s goals, strategies, and tactics, and while that is true, the bigger error was in misunderstanding Amazon itself: unlike Whole Foods, Amazon has no desire to be a grocer, and contrary to conventional wisdom the company is not even a retailer. At its core Amazon is a services provider enabled — and protected — by scale.

Indeed, to the extent Waterloo is a valid analogy, Amazon is much more akin to the British Empire, and there is now one less obstacle to sitting astride all aspects of the economy.

  1. To be clear, AWS was not about selling extra capacity; it was new capability, and Amazon itself has slowly transitioned over time (as I understand it, Amazon.com is still a hybrid).
TheRomit (30 days ago): Perhaps Ben's best piece yet (and there are many to choose from), especially given how most in the media have completely misunderstood the purchase.

How the Project Management Office Can Enable Agile Software Development


There are many benefits of agile software development, including the ability to accelerate growth, foster developer autonomy, and respond to changing customer needs faster, all while creating a company culture that embraces innovation. But, while we’re still bickering over what is precisely agile and what precisely isn’t, some feel left behind. From middle management to whole project management offices, there are many struggling to find their place in an agile transformation.

But there is an argument for the role the project management office (PMO) can play in a company gone agile, according to scrum master Dean Latchana, who gave a talk on this subject to a skeptical audience recently at the AgiNext Conference in London.

The PMO lead can act like a CEO who encourages experimentation and verifies before moving forward.

Traditionally, the PMO in any large organization is the entity that sets the standards for that business’ projects, with the goal of saving money and improving overall efficiency. Initially, the PMO may seem like the office that would slow agile software development with time-consuming requirements. But the office can also do the opposite.

“A few people have mentioned that maybe it’s an oxymoron,” he said of the agile PMO, but, “rather than the PMO [seeming] like a lowly dinosaur, that business function can really reorganize itself to be at the center of the organization.”

In fact, not only does Latchana not see the project manager’s role as obsolete, he argues that this department, associated with long Waterfall charts, could actually lead the agile revolution. And he offered several organizational approaches that can help make that happen.

How the Project Management Office Can Approach Agile

Latchana began his talk with two statements:

  • Organizations are dying. Some know it. Some are naive to it.
  • The PMO can support your organization’s revolution to survive and thrive.

The sectors he lists as being in this sort of existential crisis include media, banking, finance, personal transport (like taxis), professional services (like legal and real estate), retail, and even B2B markets.

“I believe the PMO has the ability to help their colleagues recognize uncertainty and work towards it, and use it as a way to build new business models,” he said.

Latchana sees the project management office as well-suited to “sense and respond” to these external threats if they can move to a more agile and lean mindset, and to “pivot for market or even internal response.”

An “Agile mindset goes beyond tools, practices and methodologies,” he asserted, contending that the PMO can help “reexamine our business models for delivering value to our customers or stakeholders and examine operations” by employing one or more of these different project management approaches:

#1: Push Authority to Information

This is a simple approach that doesn’t fully embrace agile’s flat organizational structure but starts to pick away at it. Following a “trust then verify” approach, he suggests that the PMO help push authority and decision making down one level, which he says can take a cycle down from six weeks to four.

#2: Palchinsky’s Principles

This approach borrows from Peter Palchinsky’s three industrial design principles:

  1. Variation: Look for and try new ideas.
  2. Survivability: Experiment on a scale where you could survive the failure.
  3. Selection: Seek feedback and learn from your mistakes.

“If you have a culture where you can actually be more experimental, you can actually test those functions,” Latchana said.

#3: Innovation Portfolio Management

As illustrated below, the innovation portfolio management approach starts with a lot of ideas that you explore in a fast-paced, agile and lean way. Then you choose perhaps one in ten to exploit and experiment with. Then you sustain your business with the best practices and products that drive profit, and when they no longer do, you retire them out of commission. He argues that the project management office is best equipped to lead these whittle-down processes.

#4: Little’s Law

Little’s Law provides the following equation:

Cycle Time = Queue Size / Completion Rate

Basically, as Latchana explained, “The smaller we can get the queue size down, and the higher we can get our efficiency done, the faster we are.” He supports Little’s Law with a side of Palchinsky’s Principles. The PMO is supposed to draw out all the bottlenecks and backlogs that are slowing things down, and help lead the way in integrating services and production lines.
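As a quick worked example (with hypothetical numbers, not Latchana’s), here is Little’s Law applied to a team backlog in a few lines of Python:

```python
def cycle_time(queue_size, completion_rate_per_week):
    """Little's Law: average time an item spends in the system."""
    return queue_size / completion_rate_per_week

# Hypothetical team: 30 items queued, 10 completed per week.
print(cycle_time(30, 10))   # 3.0 weeks per item, on average
# Halving the queue (e.g. by limiting work in progress) halves the cycle time:
print(cycle_time(15, 10))   # 1.5 weeks
```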

The image below shows how one company created an enormous web illustrating its backlog, with each wedge representing a team and each concentric circle representing a quarter farther away.

#5: Governance Principles

Working with the U.K. government, Latchana discovered the following governance principles for service delivery:

  1. Don’t slow down delivery.
  2. Decisions, when they’re needed, must be delivered at the right level.
  3. Do it with the right people.
  4. Go see for yourself.
  5. Only do it if it adds value.

Similar to the “Push Authority to Information” approach described above, there is some trust-and-verify happening here as well. Latchana said that a good leader recognizes that even though you “know you’re right,” you are willing to give other people in the organization the room to fail and learn from it. He continued that the PMO lead acts like a mini-CEO who encourages experimentation and verifies before moving forward, ideally following these guidelines:

  • Follows hypothesis-driven development.
  • Develops models for handling uncertainty.
  • Meets developers where they are (don’t let a PMO be a very separate office).
  • Learns from and nurtures survival anxiety and learning anxiety.
  • Is careful to avoid bimodalism.

Any of these five approaches can work in combination with an agile and lean mindset and even in combination with each other, and everything is a sort of hypothesis-driven transformation which can follow a template similar to this:

We believe that

[Supporting this change]

[For these people]

Will achieve [this outcome]

We will know we are successful when we see

[This signal within our organization/market]

The post How the Project Management Office Can Enable Agile Software Development appeared first on The New Stack.


Crypto Tokens: A Breakthrough in Open Network Design


It is a wonderful accident of history that the internet and web were created as open platforms that anyone — users, developers, organizations — could access equally. Among other things, this allowed independent developers to build products that quickly gained widespread adoption. Google started in a Menlo Park garage and Facebook started in a Harvard dorm room. They competed on a level playing field because they were built on decentralized networks governed by open protocols.

Today, tech companies like Facebook, Google, Amazon, and Apple are stronger than ever, whether measured by market cap, share of top mobile apps, or pretty much any other common measure.

Big 4 tech companies dominate smartphone apps (source); while their market caps continue to rise (source)

These companies also control massive proprietary developer platforms. The dominant operating systems — iOS and Android — charge 30% payment fees and exert heavy influence over app distribution. The dominant social networks tightly restrict access, hindering the ability of third-party developers to scale. Startups and independent developers are increasingly competing from a disadvantaged position.

A potential way to reverse this trend is crypto tokens — a new way to design open networks that arose from the cryptocurrency movement that began with the introduction of Bitcoin in 2008 and accelerated with the introduction of Ethereum in 2014. Tokens are a breakthrough in open network design that enable: 1) the creation of open, decentralized networks that combine the best architectural properties of open and proprietary networks, and 2) new ways to incentivize open network participants, including users, developers, investors, and service providers. By enabling the development of new open networks, tokens could help reverse the centralization of the internet, thereby keeping it accessible, vibrant and fair, and resulting in greater innovation.

Crypto tokens: unbundling Bitcoin

Bitcoin was introduced in 2008 with the publication of Satoshi Nakamoto’s landmark paper that proposed a novel, decentralized payment system built on an underlying technology now known as a blockchain. Most fans of Bitcoin (including me) mistakenly thought Bitcoin was solely a breakthrough in financial technology. (It was easy to make this mistake: Nakamoto himself called it a “p2p payment system.”)

2009: Satoshi Nakamoto’s forum post announcing Bitcoin

In retrospect, Bitcoin was really two innovations: 1) a store of value for people who wanted an alternative to the existing financial system, and 2) a new way to develop open networks. Tokens unbundle the latter innovation from the former, providing a general method for designing and growing open networks.

Networks — computing networks, developer platforms, marketplaces, social networks, etc — have always been a powerful part of the promise of the internet. Tens of thousands of networks have been incubated by developers and entrepreneurs, yet only a very small percentage of those have survived, and most of those were owned and controlled by private companies. The current state of the art of network development is very crude. It often involves raising money (venture capital is a common source of funding) and then spending it on paid marketing and other channels to overcome the “bootstrap problem” — the problem that networks tend to only become useful when they reach a critical mass of users. In the rare cases where networks succeed, the financial returns tend to accrue to the relatively small number of people who own equity in the network. Tokens offer a better way.

Ethereum, introduced in 2014 and launched in 2015, was the first major non-Bitcoin token network. The lead developer, Vitalik Buterin, had previously tried to create smart contract languages on top of the Bitcoin blockchain. Eventually he realized that (by design, mostly) Bitcoin was too limited, so a new approach was needed.

2014: Vitalik Buterin’s forum post announcing Ethereum

Ethereum is a network that allows developers to run “smart contracts” — snippets of code submitted by developers that are executed by a distributed network of computers. Ethereum has a corresponding token called Ether that can be purchased, either to hold for financial purposes or to use by purchasing computing power (known as “gas”) on the network. Tokens are also given out to “miners” which are the computers on the decentralized network that execute smart contract code (you can think of miners as playing the role of cloud hosting services like AWS). Third-party developers can write their own applications that live on the network, and can charge Ether to generate revenue.
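For a feel of what interacting with this network looks like in practice, here is a minimal sketch (not from the original essay) that checks an Ether balance and the current gas price. It assumes the web3.py library with its v6-style API; the node URL and address are hypothetical placeholders.

```python
from web3 import Web3

# Hypothetical JSON-RPC endpoint of an Ethereum node.
w3 = Web3(Web3.HTTPProvider("https://mainnet.example-node.io"))

# Ether balances are tracked in wei (1 ether = 10**18 wei).
addr = Web3.to_checksum_address("0x742d35cc6634c0532925a3b844bc454e4438f44e")
balance_wei = w3.eth.get_balance(addr)
print("Balance:", Web3.from_wei(balance_wei, "ether"), "ETH")

# Every transaction or smart-contract call pays for computation in gas,
# priced in Ether; this is the "purchasing computing power" described above.
print("Gas price:", Web3.from_wei(w3.eth.gas_price, "gwei"), "gwei")
```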

Ethereum is inspiring a new wave of token networks. (It also provided a simple way for new token networks to launch on top of the Ethereum network, using a standard known as ERC20). Developers are building token networks for a wide range of use cases, including distributed computing platforms, prediction and financial markets, incentivized content creation networks, and attention and advertising networks. Many more networks will be invented and launched in the coming months and years.

Below I walk through the two main benefits of the token model, the first architectural and the second involving incentives.

Tokens enable the management and financing of open services

Proponents of open systems never had an effective way to manage and fund operating services, leading to a significant architectural disadvantage compared to their proprietary counterparts. This was particularly evident during the last internet mega-battle between open and closed networks: the social wars of the late 2000s. As Alexis Madrigal recently wrote, back in 2007 it looked like open networks would dominate going forward:

In 2007, the web people were triumphant. Sure, the dot-com boom had busted, but empires were being built out of the remnant swivel chairs and fiber optic cables and unemployed developers. Web 2.0 was not just a temporal description, but an ethos. The web would be open. A myriad of services would be built, communicating through APIs, to provide the overall internet experience.

But with the launch of the iPhone and the rise of smartphones, proprietary networks quickly won out:

As that world-historical explosion began, a platform war came with it. The Open Web lost out quickly and decisively. By 2013, Americans spent about as much of their time on their phones looking at Facebook as they did the whole rest of the open web.

Why did open social protocols get so decisively defeated by proprietary social networks? The rise of smartphones was only part of the story. Some open protocols — like email and the web — survived the transition to the mobile era. Open protocols relating to social networks were high quality and abundant (e.g. RSS, FOAF, XFN, OpenID). What the open side lacked was a mechanism for encapsulating software, databases, and protocols together into easy-to-use services.

For example, in 2007, Wired magazine ran an article in which they tried to create their own social network using open tools:

For the last couple of weeks, Wired News tried to roll its own Facebook using free web tools and widgets. We came close, but we ultimately failed. We were able to recreate maybe 90 percent of Facebook’s functionality, but not the most important part — a way to link people and declare the nature of the relationship.

Some developers proposed solving this problem by creating a database of social graphs run by a non-profit organization:

Establish a non-profit and open source software (with copyrights held by the non-profit) which collects, merges, and redistributes the graphs from all other social network sites into one global aggregated graph. This is then made available to other sites (or users) via both public APIs (for small/casual users) and downloadable data dumps, with an update stream / APIs, to get iterative updates to the graph (for larger users).

These open schemes required widespread coordination among standards bodies, server operators, app developers, and sponsoring organizations to mimic the functionality that proprietary services could provide all by themselves. As a result, proprietary services were able to create better user experiences and iterate much faster. This led to faster growth, which in turn led to greater investment and revenue, which then fed back into product development and further growth. Thus began a flywheel that drove the meteoric rise of proprietary social networks like Facebook and Twitter.

Had the token model for network development existed back in 2007, the playing field would have been much more level. First, tokens provide a way not only to define a protocol, but to fund the operating expenses required to host it as a service. Bitcoin and Ethereum have tens of thousands of servers around the world (“miners”) that run their networks. They cover the hosting costs with built-in mechanisms that automatically distribute token rewards to computers on the network (“mining rewards”).

There are over 20,000 Ethereum nodes around the world (source)

Second, tokens provide a model for creating shared computing resources (including databases, compute, and file storage) while keeping the control of those resources decentralized (and without requiring an organization to maintain them). This is the blockchain technology that has been talked about so much. Blockchains would have allowed shared social graphs to be stored on a decentralized network. It would have been easy for the Wired author to create an open social network using the tools available today.

Tokens align incentives among network participants

Some of the fiercest battles in tech are between complements. There were, for example, hundreds of startups that tried to build businesses on the APIs of social networks only to have the terms change later on, forcing them to pivot or shut down. Microsoft’s battles with complements like Netscape and Intuit are legendary. Battles within ecosystems are so common and drain so much energy that business books are full of frameworks for how one company can squeeze profits from adjacent businesses (e.g. Porter’s five forces model).

Token networks remove this friction by aligning network participants to work together toward a common goal — the growth of the network and the appreciation of the token. This alignment is one of the main reasons Bitcoin continues to defy skeptics and flourish, even while new token networks like Ethereum have grown alongside it.

Moreover, well-designed token networks include an efficient mechanism to incentivize network participants to overcome the bootstrap problem that bedevils traditional network development. For example, Steemit is a decentralized Reddit-like token network that makes payments to users who post and upvote articles. When Steemit launched last year, the community was pleasantly surprised when they made their first significant payout to users.

Tokens help overcome the bootstrap problem by adding financial utility when application utility is low

This in turn led to the appreciation of Steemit tokens, which increased future payouts, leading to a virtuous cycle where more users led to more investment, and vice versa. Steemit is still a beta project and has since had mixed results, but was an interesting experiment in how to generalize the mutually reinforcing interaction between users and investors that Bitcoin and Ethereum first demonstrated.

A lot of attention has been paid to token pre-sales (so-called “ICOs”), but they are just one of multiple ways in which the token model innovates on network incentives. A well-designed token network carefully manages the distribution of tokens across all five groups of network participants (users, core developers, third-party developers, investors, service providers) to maximize the growth of the network.

One way to think about the token model is to imagine if the internet and web hadn’t been funded by governments and universities, but instead by a company that raised money by selling off domain names. People could buy domain names either to use them or as an investment (collectively, domain names are worth tens of billions of dollars today). Similarly, domain names could have been given out as rewards to service providers who agreed to run hosting services, and to third-party developers who supported the network. This would have provided an alternative way to finance and accelerate the development of the internet while also aligning the incentives of the various network participants.

The open network movement

The cryptocurrency movement is the spiritual heir to previous open computing movements, including the open source software movement led most visibly by Linux, and the open information movement led most visibly by Wikipedia.

1991: Linus Torvalds’ forum post announcing Linux; 2001: the first Wikipedia page

Both of these movements were once niche and controversial. Today Linux is the dominant worldwide operating system, and Wikipedia is the most popular informational website in the world.

Crypto tokens are currently niche and controversial. If present trends continue, they will soon be seen as a breakthrough in the design and development of open networks, combining the societal benefits of open protocols with the financial and architectural benefits of proprietary networks. They are also an extremely promising development for those hoping to keep the internet accessible to entrepreneurs, developers, and other independent creators.
