How the Project Management Office Can Enable Agile Software Development


There are many benefits to agile software development, including the ability to accelerate growth, foster developer autonomy, and respond faster to changing customer needs, all while creating a company culture that embraces innovation. But while we’re still bickering over what precisely is agile and what isn’t, some feel left behind. From middle management to whole project management offices, many are struggling to find their place in an agile transformation.

But there is an argument for the role the project management office (PMO) can play in a company gone agile, according to scrum master Dean Latchana, who gave a talk on this subject to a skeptical audience recently at the AgiNext Conference in London.

The PMO lead can act like a CEO who encourages experimentation and verifies before moving forward.

Traditionally, the PMO in any large organization is the entity that sets the standards for that business’ projects, with the goal of saving money and improving overall efficiency. At first glance, the PMO may seem like the office that would bog agile software development down with burdensome requirements. But the office can also do the opposite.

“A few people have mentioned that maybe it’s an oxymoron,” he said of the agile PMO, but, “rather than the PMO [seeming] like a lowly dinosaur, that business function can really reorganize itself to be at the center of the organization.”

In fact, not only does Latchana consider the project manager’s role far from obsolete, he argues that this department, long associated with Waterfall charts, could actually lead the agile revolution. And he offered several organizational approaches that can help make that happen.

How the Project Management Office Can Approach Agile

Latchana began his talk with two statements:

  • Organizations are dying. Some know it. Some are naive to it.
  • The PMO can support your organization’s revolution to survive and thrive.

The organizations he places in this sort of existential crisis include media, banking, finance, personal transport (like taxis), professional services (like legal and real estate), retail, and even B2B markets.

“I believe the PMO has the ability to help their colleagues recognize uncertainty and work towards it, and use it as a way to build new business models,” he said.

Latchana sees the project management office as well-suited to “sense and respond” to these external threats if they can move to a more agile and lean mindset, and to “pivot for market or even internal response.”

An “Agile mindset goes beyond tools, practices and methodologies,” he asserted, contending that the PMO can help “reexamine our business models for delivering value to our customers or stakeholders and examine operations” by employing one or more of these different project management approaches:

#1: Push Authority to Information

This is a simple approach that doesn’t fully embrace agile’s flat organizational structure but starts to chip away at it. Following a “trust then verify” approach, he suggests that the PMO help push authority and decision-making down one level, which can cut a cycle from six weeks to four.

#2: Palchinsky’s Principles

This approach borrows from Peter Palchinsky’s three industrial design principles:

  1. Variation: Look for and try new ideas.
  2. Survivability: Experiment on a scale whose failure you could survive.
  3. Selection: Seek feedback and learn from your mistakes.

“If you have a culture where you can actually be more experimental, you can actually test those functions,” Latchana said.

#3: Innovation Portfolio Management

As illustrated below, the innovation portfolio management approach starts with a large number of ideas that you explore in a fast-paced, agile and lean way. You then choose perhaps one in ten to exploit and experiment with. Finally, you sustain your business with the practices and products that drive profit, and retire the ones that no longer do. He argues that the project management office is best equipped to lead these whittling-down processes.

#4: Little’s Law

Little’s Law provides the following equation:

Cycle Time = Queue Size / Completion Rate

Basically, as Latchana explained, “The smaller we can get the queue size down, and the higher we can get our efficiency done, the faster we are.” He supports Little’s Law with a side of Palchinsky’s Principles. The PMO is supposed to draw out all the bottlenecks and backlogs that are slowing things down, and help lead the way in integrating services and production lines.
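To make the arithmetic concrete, here is a minimal worked example in Python — the numbers are illustrative, not figures from the talk:

    # Little's Law: cycle time = queue size / completion rate
    def cycle_time(queue_size: float, completion_rate: float) -> float:
        """Average time a work item spends in the system."""
        return queue_size / completion_rate

    print(cycle_time(24, 6))  # 24 queued items, 6 finished per week -> 4.0 weeks
    print(cycle_time(12, 6))  # halve the queue -> 2.0 weeks, with no one working faster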

The image below shows how one company mapped its backlog as an enormous web, with each wedge-shaped section representing a team and each concentric circle representing a quarter farther away.

#5: Governance Principles

Working with the U.K. government, Latchana discovered the following governance principles for service delivery:

  1. Don’t slow down delivery.
  2. Decisions, when they’re needed, must be made at the right level.
  3. Do it with the right people.
  4. Go see for yourself.
  5. Only do it if it adds value.

Similar to the “Push Authority to Information” approach described above, there is some trust-and-verify happening here as well. Latchana said that a good leader recognizes that even though you “know you’re right,” you should be willing to give other people in the organization room to fail and learn from it. He continued that the PMO lead acts like a mini-CEO who encourages experimentation and verifies before moving forward, ideally following these guidelines:

  • Follows hypothesis-driven development.
  • Develops models for handling uncertainty.
  • Meets developers where they are (don’t let a PMO be a very separate office).
  • Learns from and nurtures survival anxiety and learning anxiety.
  • Is careful to avoid bimodalism.

Any of these five approaches can work in combination with an agile and lean mindset, and even in combination with each other. Each amounts to a hypothesis-driven transformation, which can follow a template similar to this:

We believe that [supporting this change] [for these people] will achieve [this outcome].

We will know we are successful when we see [this signal within our organization/market].

The post How the Project Management Office Can Enable Agile Software Development appeared first on The New Stack.


Crypto Tokens: A Breakthrough in Open Network Design


It is a wonderful accident of history that the internet and web were created as open platforms that anyone — users, developers, organizations — could access equally. Among other things, this allowed independent developers to build products that quickly gained widespread adoption. Google started in a Menlo Park garage and Facebook started in a Harvard dorm room. They competed on a level playing field because they were built on decentralized networks governed by open protocols.

Today, tech companies like Facebook, Google, Amazon, and Apple are stronger than ever, whether measured by market cap, share of top mobile apps, or pretty much any other common measure.

Big 4 tech companies dominate smartphone apps, while their market caps continue to rise.

These companies also control massive proprietary developer platforms. The dominant operating systems — iOS and Android — charge 30% payment fees and exert heavy influence over app distribution. The dominant social networks tightly restrict access, hindering the ability of third-party developers to scale. Startups and independent developers are increasingly competing from a disadvantaged position.

A potential way to reverse this trend is crypto tokens — a new way to design open networks that arose from the cryptocurrency movement that began with the introduction of Bitcoin in 2008 and accelerated with the introduction of Ethereum in 2014. Tokens are a breakthrough in open network design that enable: 1) the creation of open, decentralized networks that combine the best architectural properties of open and proprietary networks, and 2) new ways to incentivize open network participants, including users, developers, investors, and service providers. By enabling the development of new open networks, tokens could help reverse the centralization of the internet, thereby keeping it accessible, vibrant and fair, and resulting in greater innovation.

Crypto tokens: unbundling Bitcoin

Bitcoin was introduced in 2008 with the publication of Satoshi Nakamoto’s landmark paper that proposed a novel, decentralized payment system built on an underlying technology now known as a blockchain. Most fans of Bitcoin (including me) mistakenly thought Bitcoin was solely a breakthrough in financial technology. (It was easy to make this mistake: Nakamoto himself called it a “peer-to-peer electronic cash system.”)

2009: Satoshi Nakamoto’s forum post announcing Bitcoin

In retrospect, Bitcoin was really two innovations: 1) a store of value for people who wanted an alternative to the existing financial system, and 2) a new way to develop open networks. Tokens unbundle the latter innovation from the former, providing a general method for designing and growing open networks.

Networks — computing networks, developer platforms, marketplaces, social networks, etc — have always been a powerful part of the promise of the internet. Tens of thousands of networks have been incubated by developers and entrepreneurs, yet only a very small percentage of those have survived, and most of those were owned and controlled by private companies. The current state of the art of network development is very crude. It often involves raising money (venture capital is a common source of funding) and then spending it on paid marketing and other channels to overcome the “bootstrap problem” — the problem that networks tend to only become useful when they reach a critical mass of users. In the rare cases where networks succeed, the financial returns tend to accrue to the relatively small number of people who own equity in the network. Tokens offer a better way.

Ethereum, introduced in 2014 and launched in 2015, was the first major non-Bitcoin token network. The lead developer, Vitalik Buterin, had previously tried to create smart contract languages on top of the Bitcoin blockchain. Eventually he realized that (by design, mostly) Bitcoin was too limited, so a new approach was needed.

2014: Vitalik Buterin’s forum post announcing Ethereum

Ethereum is a network that allows developers to run “smart contracts” — snippets of code submitted by developers that are executed by a distributed network of computers. Ethereum has a corresponding token called Ether that can be purchased, either to hold for financial purposes or to use by purchasing computing power (known as “gas”) on the network. Tokens are also given out to “miners” — the computers on the decentralized network that execute smart contract code (you can think of miners as playing the role of cloud hosting services like AWS). Third-party developers can write their own applications that live on the network, and can charge Ether to generate revenue.

Ethereum is inspiring a new wave of token networks. (It also provided a simple way for new token networks to launch on top of the Ethereum network, using a standard known as ERC20). Developers are building token networks for a wide range of use cases, including distributed computing platforms, prediction and financial markets, incentivized content creation networks, and attention and advertising networks. Many more networks will be invented and launched in the coming months and years.
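Real token contracts run as smart-contract code on networks like Ethereum, but the bookkeeping an ERC20-style token performs is simple enough to sketch in Python. This toy model is only an illustration — the method names echo the standard, while the account names and figures are invented:

    class ToyToken:
        """Toy in-memory model of the ledger an ERC20-style contract keeps."""

        def __init__(self, total_supply: int, creator: str):
            # On a real network this state lives on the blockchain,
            # replicated and verified by miners rather than one process.
            self.balances = {creator: total_supply}

        def balance_of(self, account: str) -> int:
            return self.balances.get(account, 0)

        def transfer(self, sender: str, recipient: str, amount: int) -> None:
            if self.balance_of(sender) < amount:
                raise ValueError("insufficient balance")
            self.balances[sender] -= amount
            self.balances[recipient] = self.balance_of(recipient) + amount

    token = ToyToken(total_supply=1_000_000, creator="alice")
    token.transfer("alice", "bob", 250)
    print(token.balance_of("bob"))  # 250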

Below I walk through the two main benefits of the token model, the first architectural and the second involving incentives.

Tokens enable the management and financing of open services

Proponents of open systems never had an effective way to manage and fund operating services, leading to a significant architectural disadvantage compared to their proprietary counterparts. This was particularly evident during the last internet mega-battle between open and closed networks: the social wars of the late 2000s. As Alexis Madrigal recently wrote, back in 2007 it looked like open networks would dominate going forward:

In 2007, the web people were triumphant. Sure, the dot-com boom had busted, but empires were being built out of the remnant swivel chairs and fiber optic cables and unemployed developers. Web 2.0 was not just a temporal description, but an ethos. The web would be open. A myriad of services would be built, communicating through APIs, to provide the overall internet experience.

But with the launch of the iPhone and the rise of smartphones, proprietary networks quickly won out:

As that world-historical explosion began, a platform war came with it. The Open Web lost out quickly and decisively. By 2013, Americans spent about as much of their time on their phones looking at Facebook as they did the whole rest of the open web.

Why did open social protocols get so decisively defeated by proprietary social networks? The rise of smartphones was only part of the story. Some open protocols — like email and the web — survived the transition to the mobile era. Open protocols relating to social networks were high quality and abundant (e.g. RSS, FOAF, XFN, OpenID). What the open side lacked was a mechanism for encapsulating software, databases, and protocols together into easy-to-use services.

For example, in 2007, Wired magazine ran an article in which they tried to create their own social network using open tools:

For the last couple of weeks, Wired News tried to roll its own Facebook using free web tools and widgets. We came close, but we ultimately failed. We were able to recreate maybe 90 percent of Facebook’s functionality, but not the most important part — a way to link people and declare the nature of the relationship.

Some developers proposed solving this problem by creating a database of social graphs run by a non-profit organization:

Establish a non-profit and open source software (with copyrights held by the non-profit) which collects, merges, and redistributes the graphs from all other social network sites into one global aggregated graph. This is then made available to other sites (or users) via both public APIs (for small/casual users) and downloadable data dumps, with an update stream / APIs, to get iterative updates to the graph (for larger users).

These open schemes required widespread coordination among standards bodies, server operators, app developers, and sponsoring organizations to mimic the functionality that proprietary services could provide all by themselves. As a result, proprietary services were able to create better user experiences and iterate much faster. This led to faster growth, which in turn led to greater investment and revenue, which then fed back into product development and further growth. Thus began a flywheel that drove the meteoric rise of proprietary social networks like Facebook and Twitter.

Had the token model for network development existed back in 2007, the playing field would have been much more level. First, tokens provide a way not only to define a protocol, but to fund the operating expenses required to host it as a service. Bitcoin and Ethereum have tens of thousands of servers around the world (“miners”) that run their networks. They cover the hosting costs with built-in mechanisms that automatically distribute token rewards to computers on the network (“mining rewards”).

There are over 20,000 Ethereum nodes around the world

Second, tokens provide a model for creating shared computing resources (including databases, compute, and file storage) while keeping the control of those resources decentralized (and without requiring an organization to maintain them). This is the blockchain technology that has been talked about so much. Blockchains would have allowed shared social graphs to be stored on a decentralized network. It would have been easy for the Wired author to create an open social network using the tools available today.

Tokens align incentives among network participants

Some of the fiercest battles in tech are between complements. There were, for example, hundreds of startups that tried to build businesses on the APIs of social networks only to have the terms change later on, forcing them to pivot or shut down. Microsoft’s battles with complements like Netscape and Intuit are legendary. Battles within ecosystems are so common and drain so much energy that business books are full of frameworks for how one company can squeeze profits from adjacent businesses (e.g. Porter’s five forces model).

Token networks remove this friction by aligning network participants to work together toward a common goal — the growth of the network and the appreciation of the token. This alignment is one of the main reasons Bitcoin continues to defy skeptics and flourish, even while new token networks like Ethereum have grown alongside it.

Moreover, well-designed token networks include an efficient mechanism to incentivize network participants to overcome the bootstrap problem that bedevils traditional network development. For example, Steemit is a decentralized Reddit-like token network that makes payments to users who post and upvote articles. When Steemit launched last year, the community was pleasantly surprised when it made its first significant payout to users.

Tokens help overcome the bootstrap problem by adding financial utility when application utility is low

This in turn led to the appreciation of Steemit tokens, which increased future payouts, leading to a virtuous cycle where more users led to more investment, and vice versa. Steemit is still a beta project and has since had mixed results, but was an interesting experiment in how to generalize the mutually reinforcing interaction between users and investors that Bitcoin and Ethereum first demonstrated.

A lot of attention has been paid to token pre-sales (so-called “ICOs”), but they are just one of multiple ways in which the token model innovates on network incentives. A well-designed token network carefully manages the distribution of tokens across all five groups of network participants (users, core developers, third-party developers, investors, service providers) to maximize the growth of the network.

One way to think about the token model is to imagine if the internet and web hadn’t been funded by governments and universities, but instead by a company that raised money by selling off domain names. People could buy domain names either to use them or as an investment (collectively, domain names are worth tens of billions of dollars today). Similarly, domain names could have been given out as rewards to service providers who agreed to run hosting services, and to third-party developers who supported the network. This would have provided an alternative way to finance and accelerate the development of the internet while also aligning the incentives of the various network participants.

The open network movement

The cryptocurrency movement is the spiritual heir to previous open computing movements, including the open source software movement led most visibly by Linux, and the open information movement led most visibly by Wikipedia.

1991: Linus Torvalds’ forum post announcing Linux; 2001: the first Wikipedia page

Both of these movements were once niche and controversial. Today Linux is the dominant worldwide operating system, and Wikipedia is the most popular informational website in the world.

Crypto tokens are currently niche and controversial. If present trends continue, they will soon be seen as a breakthrough in the design and development of open networks, combining the societal benefits of open protocols with the financial and architectural benefits of proprietary networks. They are also an extremely promising development for those hoping to keep the internet accessible to entrepreneurs, developers, and other independent creators.


Microsoft Draft Offers Kubernetes Support for Developers


Containers make it easier to deploy applications with all their libraries and dependencies, though in many cases organizations do have to change their workflow to accommodate the new technology. That can cause adoption of container technology to stall inside organizations when the change is driven by operations, noted Gabe Monroy, Microsoft’s lead program manager for containers on Azure.

This is a problem Monroy believes a new open source Kubernetes deployment tool called Draft can fix. Monroy is the former chief technology officer of Deis, a company that Microsoft is in the process of acquiring. The technology was unveiled at the CoreOS user conference, held this week in San Francisco.

“Draft is solving what I think is the number one problem facing organizations that are trying to adopt containers at scale. When the operations and IT teams in a company have bought into the idea of containers and they stand up Kubernetes clusters and have some initial wins, they turn around ready to unleash this to a team of a thousand Java developers — and the reaction they get is like deer in the headlights. It’s too complicated, it’s too much conceptual overhead; this is just too hard for us.”

In other words, operations teams need to make Kubernetes easier and more palatable for software teams.

Draft reduces that conceptual overhead by taking away most of the requirements for using Kubernetes; developers don’t even need to have Docker or Kubernetes on their laptop — just the Draft binary.

“You start writing your app in any language, like Node.js, you start scaffolding it and when you’re ready to see if it can run in the Kubernetes environment, in the sandbox, you just type ‘draft create’ and the tool detects what the language is and it writes out the Dockerfile and the Helm chart into the source tree. So it scaffolds out and containerizes the app for you.”

Helm is the Kubernetes package manager developed and supported by Deis.

That language detection is based on configurable Draft “packs,” which contain a detection script, a Dockerfile and a Helm Chart. By default, Draft comes with packs for Python, Node.js, Java, Ruby, PHP and Go. Microsoft is likely to come out with more packs — TypeScript is under consideration — but Monroy expects the community to build more packs to support different languages, frameworks and runtimes.

“One of the benefits of microservices is allowing teams to pick the right language and framework for the job. Packs are extremely simple, so teams can and will customize them for their environment. Large customers want the ability to say ‘here is our Java 7 environment and our node environment that are blessed by the operations team and we don’t want developers to do anything else.’ Draft packs allow that kind of customization and control.”

The container can be a Windows or Linux Docker container; “we’re targeting all the platforms,” Monroy confirmed, and in time that will include Linux Docker containers running on Windows 10 and Windows Server 2016 directly through Hyper-V (rather than in a virtual machine).

The second new command for developers is “draft up.” “This command ships the source code — including the new bits that containerize the app — to Kubernetes, remotely builds the Docker images, and deploys it into a developer sandbox using the Helm chart,” Monroy said. “The developer gets a URL to visit their app and see it live.”

The Docker registry details needed for that will have been set up by the operations team as part of providing Draft to developers.

“Now you go into your IDE, whatever IDE that is, make a change and save your code. The minute that save happens, Draft detects it and redeploys up to the Kubernetes cluster, and it’s available in seconds,” Monroy said. Those commands could easily be integrated into an IDE directly, but either way, it’s a much smaller change to a developer workflow than targeting Docker or Kubernetes directly.
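Put together, the developer-facing inner loop looks roughly like the following session; only the “draft create” and “draft up” commands come from Monroy’s description, and the comments are illustrative:

    $ cd my-node-app
    $ draft create   # detects the language, writes a Dockerfile and Helm chart
    $ draft up       # ships the source tree, builds the image remotely,
                     # and deploys it to a sandbox reachable at a URL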

Build and Test

For a developer, Monroy says this will feel rather like using platform services. “With PaaS, a developer just writes code and that code goes to the cloud. This is almost like a client-side PaaS that writes your deployment and configuration info to the repo. But because Kubernetes can model anything, you could use this to stand up Cassandra or WordPress or something that PaaS systems have a lot of trouble with. Things with stateful components can be written with Draft; it can model volumes as easily as cloud applications.”

Draft is aimed at the “inner loop” of the developer workflow, said Monroy: “while developers are writing code but before they commit changes to version control. Once they’re happy with their changes, they can commit those to source control.”

Writing build and deployment configuration to the source tree makes Draft a better fit for the kind of continuous integration and development pipelines that drive DevOps than PaaS, especially when it comes to build testing.

Typically, PaaS systems have not integrated well with continuous integration and deployment pipelines where developers check code into source control and then the continuous integration system pulls it out and builds it, and tests it and stages it and then it gets rolled to production.

“Draft solves this because it drops the configuration into the source control repo and the continuous integration pipeline can pick it up from there,” Monroy said.

There are few other tools aimed at helping fit Docker into the developer workflow the way Draft does. “We see a lot of people use Docker Compose but the problem is that requires Docker on a laptop, which not every organization is willing to roll out across their entire fleet,” he noted. Docker Compose and tools like Bitnami Kompose use the Docker data model; Draft uses the Kubernetes data model, which Monroy called “much richer and much higher fidelity”.

Draft ships the entire source tree to the Kubernetes cluster and builds the containers there, which is how it gets around the need to have Docker on the developer’s system. “If you have a massive repo there could be some latency there,” warned Monroy. If that’s an issue, Draft can work equally well with a Kubernetes cluster on a laptop though, and for some organizations, it will replace even slower processes.

One large company wanting to move its 10,000-plus Java developers to Kubernetes has been using Cloud Foundry and cf push; they’re very keen to use Draft instead.

Developer Productivity

Draft takes one step toward solving an issue Azure Container Service architect Brendan Burns calls “empty orchestrator syndrome”: “We’re totally deploying Kubernetes, we’ve deployed Kubernetes — now what?”

“The real problem I think, is that it’s still too hard to build applications. We have a lot of the pieces but we haven’t started to actually show a way things could come together,” Burns said. Draft fits in neatly with developments like service brokers and managing secrets for applications in containers and improving developer productivity with features like remote debugging in the cloud.

Draft is only intended to be one building block in a composable system, though. “We wanted to build a tool that did one thing, one workflow, and did it well,” Monroy told us. “It does help facilitate the pipeline view of the universe, and we have other things in mind for the other parts of the workflow where a CI system picks up code.”

The Cloud Native Computing Foundation is a sponsor of The New Stack.

Feature image: CoreOS’ Alex Polvi and Microsoft’s Gabe Monroy (partially hidden) demonstrate Draft at CoreOS Fest. Photo by Alex Williams.

The post Microsoft Draft Offers Kubernetes Support for Developers appeared first on The New Stack.


Six Gotchas with Running Docker Containers on Hadoop


Despite the potential value of containerizing workloads on Hadoop, Cloudera’s Daniel Templeton recommends waiting for Hadoop 3.0 before deploying Docker containers, citing security issues and other caveats.

“I thought of titling this, ‘It’s cool, but you can’t use it.’ There’s a lot of potential here, but until 3.0 — [it’s] not going to solve your problems,” he told those attending ApacheCon North America in Miami last week.

Templeton, who is a software engineer on the YARN development team at Cloudera, delved into the Docker support provided by the Hadoop LinuxContainerExecutor and discussed when there might be better alternatives. He stipulated that he was talking about Docker on Hadoop, not Hadoop on Docker, which he called “an entirely different story.”

“I’ve got a Hadoop cluster. I want to execute my workloads in Docker containers,” he explained.

Hadoop’s YARN scheduler supports Docker as an execution engine for submitted applications, but there are a handful of things you should understand before you enter this brave new world of Docker on YARN, he said, explaining:

1. The application owner must exist in the Docker container

Currently, with Docker, when you run a container, you specify a user to run it as. If you specify a UID — not a username — and that UID doesn’t exist in the container, Docker will spontaneously create it for you. This remapping won’t work well across large numbers of images, where the user needs to be specified beforehand. Otherwise, you can’t access anything: you can’t access your launch script and you can’t write your logs; therefore it’s broken.

“There is no good way to deal with this. The discussion is YARN-4266. If you have a brilliant idea how to fix this, jump in on it,” he said. The approach taken by YARN-4266 “might not get exactly what you wanted, but it’s the least destructive thing we could think of doing. …This is one I don’t see resolving soon until Docker extends what they let you do,” he said.
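The underlying Docker behavior is easy to see outside of Hadoop by running a stock image as an arbitrary UID — an illustrative session, with output abbreviated and liable to vary by Docker version:

    $ docker run --rm --user 1005 busybox id
    uid=1005 gid=0(root) groups=0(root)
    $ docker run --rm --user 1005 busybox whoami
    whoami: unknown uid 1005

The container runs, but the UID maps to no named user, no home directory, and no writable paths beyond whatever the image happens to leave open.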

From Daniel Templeton’s presentation.

2. Docker containers won’t be independent of the environment they run in

One of the chief benefits of Docker containers is their portability. Guess what? They won’t be very portable in Hadoop. If you want HDFS access, if you need to be able to deserialize your tokens, if you need a framework like MapReduce, if you’re doing Spark — you’ve got to have those binaries or those jars in your image. And versions have to line up.

There is a patch posted for this. The patch allows white-listed volume mounts, so you as an administrator can say, “These directories are allowed to be mounted into Docker containers,” and you can specify which of those directories to mount when you submit your job. Problem solved, as long as administrators pay attention to the fact that the job could be running as root in the container, so don’t let them mount anything that could screw things up, he said.
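For reference, in the Hadoop 3.x line this whitelisting surfaced as administrator-controlled properties in container-executor.cfg. A sketch with illustrative paths — the property names are taken from the later Hadoop 3 documentation and may differ from the exact patch Templeton described:

    [docker]
      docker.allowed.ro-mounts=/etc/hadoop/conf,/usr/lib/jvm
      docker.allowed.rw-mounts=/var/log/hadoop-yarn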

3. Large images may cause failures

There is currently nothing in YARN to handle Docker image caching. When you execute your job, the docker run will implicitly pull the image from the repo. Spark and MapReduce both have a 10-minute timeout, so if you have an image that takes more than 10 minutes to download over the network, your job will fail. If you persistently resubmit, it will eventually land on a node you’ve already tried, and it will run. But that’s not the greatest solution.

YARN-3854 is a first step, not a solution. It lets YARN localize images the same way it localizes data. In YARN, you can say, “I’m submitting this application, and this is the data, the ancillary libraries — whatever the heck it is — that this job is going to need. Please distribute it to all the nodes where my job will run.” And YARN will do that. The problem is that it will not save you from the 10-minute timeout. So there’s more work to do there.
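That 10-minute figure matches MapReduce’s default task timeout, mapreduce.task.timeout, which defaults to 600,000 ms. Until image localization fully lands, one blunt workaround — a sketch, not something Templeton recommended — is raising that timeout in mapred-site.xml:

    <property>
      <name>mapreduce.task.timeout</name>
      <value>1800000</value> <!-- 30 minutes, up from the 600000 ms default -->
    </property>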

4. There is no real support for secure repos

Docker stores its credentials for accessing a secure repo in a client config, which is always your .docker/config.json. You have no way from YARN to change that. That means when you’re accessing a secure repo, you’re subject to the .docker/config.json file in your user’s home directory on whichever node manager you land on. That’s probably not what you want. There is a JIRA for that, however — YARN-5428 — which will make it configurable.

5. There is only basic support for networks

“When you’re thinking Docker on YARN, you start thinking about Kubernetes, Mesos, that type of thing. Kubernetes gives you this really nice facility for doing network management, right? You submit jobs and you say, ‘This is part of the network and that’s part of the network.’ And networks magically materialize and CNS routing is handled, and the world’s a wonderful place with puppies and unicorns,” he said.

YARN does not offer you that. It does not offer the notion of pods where you can say, “These applications are all part of the same pod. Go run them together and share the network.” There’s no notion of port mapping built in. There’s no real automated management over the network. Instead, you can explicitly create networks in Docker on all your node manager machines, then you can request those networks. But that’s it.
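Concretely, that means creating the network by hand on every node manager host and then requesting it per job. The docker network command is standard Docker; the environment variable shown is the request mechanism from the Hadoop 3-era documentation and may differ in earlier builds:

    $ docker network create --driver bridge yarn-apps   # on each node manager
    # then, when submitting the job:
    YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK=yarn-apps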

6. There are massive security implications

Some people are paranoid about this, though he says he’s not: you can execute privileged containers. A privileged container in Docker gets to peek into the underlying operating system, with access to things like /proc and devices. You can turn that off or limit it to a certain set of users, so it is controlled, but you have to be aware of it.

The other side of the coin is that you can only do terrible things to the underlying OS if you’re running as root in the container. At this point, YARN provides you no way to specify your user; in the future it likely will. “There are security implications with Docker on Hadoop that you really have to think through.”

Hadoop 3.0

While some Docker fixes are in Hadoop 2.8, they’re not enough to be useful, according to Templeton. Among the 3.0 features not in 2.8:

  • Mounts localized file directories as volumes
  • cgroups support
  • Support for different networking options
  • Documentation

The Hadoop 3.0 release is scheduled for the end of the year, according to release manager Andrew Wang, also a software engineer at Cloudera. It has gone through two alphas, and a third alpha is planned before it goes to beta.

Its major feature will be HDFS erasure coding, which will provide users with 1.5 times the effective storage, meaning they can save half the cost of hard disks. This reworking of storage will have a massive impact on users of YARN and MapReduce, Wang said in a separate interview.

The project has been working with major users including Yahoo, Twitter and Microsoft to ensure compatibility with existing systems and enable rolling upgrades without pain, Wang said.

Feature image via Pixabay.

The post Six Gotchas with Running Docker Containers on Hadoop appeared first on The New Stack.


Can faux meat produce meaty profits? Entrepreneurs survey the food frontier

Josh Balk and Hampton Creek cookies
Josh Balk, a co-founder of Hampton Creek Foods, grins over a spread of cookies made with Hampton Creek’s vegan cookie dough. (GeekWire Photo / Alan Boyle)

Is there money to be made by going meatless? Substitutes for meat, dairy and eggs have been around for decades, as demonstrated by the success of Seattle-based Field Roast Grain Meat Co., but new technologies may well give what’s now known as “clean meat” a boost.

“I don’t know of any companies that are true innovators in this space that are flailing,” said Chris Kerr, investment manager at New Crop Capital, a D.C.-based venture capital firm that specializes in the food frontier.

Kerr was among the experts speaking at a survey of the marketplace for clean meat – that is, meat products that are essentially grown from cells in a vat rather than animals in a feedlot – as well as for plant-based proteins like Field Roast. Monday’s presentation was organized by the University of Washington’s CoMotion Labs in collaboration with the Good Food Institute, a clean-meat advocacy group.

Clean meat made a splash in 2013 when Dutch researcher Mark Post served up a hamburger built from lab-grown stem cells, at a cost of $330,000 for the burger.

Since then, a number of startups have been working to bring that cost down. Post formed a company called Mosa Meat to commercialize the technology. Other cultured-meat ventures include Memphis Meats in the U.S. and SuperMeat in Israel.

Meanwhile, other ventures are working to make plant-based proteins more palatable for fans of meat, dairy and egg products. Hampton Creek Foods, for example, offers mayonnaise, salad dressings, cookies and cookie dough that should pass muster with the strictest vegans. Other entrants – including Impossible Foods, Beyond Meat, New Wave Foods and Miyoko’s Kitchen – are working on newfangled plant-based versions of burgers, seafood, cheese and butter.

Food frontier panel at CoMotion
An event on the food frontier, presented by UW CoMotion Labs, featured Christie Lagally, a senior scientist at the Good Food Institute; Josh Balk, a co-founder of Hampton Creek Foods; and Amy Webster of the Humane Society of the United States. (GeekWire Photo / Alan Boyle)

One of the motivations for marketing (and eating) meat substitutes is to make a dent in the billions of animals and sea creatures that are killed every year to fuel humanity’s appetite.

Another is a realization that livestock agriculture will be too inefficient to feed the estimated 9.7 billion people who will be living on Earth by 2050. By one measure, it takes 40 calories of energy to produce each calorie of food output from beef.

“We actually see our food system as being kind of a disaster,” Kerr said.

Then there’s the profit angle: The next frontier for clean meat and plant-based protein is to produce products that are trendier and more affordable. That’s what it’ll take to expand the market from those who are committed to a meatless lifestyle or sustainable agriculture to the price-conscious mass market.

“What I don’t think has been made is a super-cheap nugget that can displace chicken,” said Josh Balk, a co-founder of Hampton Creek who is now vice president for farm animal protection at the Humane Society of the United States. “If someone would create that, I guarantee that is going to be a coup for this business.”

Washington state could be well-placed to play a role in the faux meat industry: Eastern Washington ranks among the nation’s biggest producers of pulse crops – dry peas, lentils and chickpeas – which happen to be the readiest sources for plant-based meat substitutes.

“Right now, peas are a pretty hot item,” Kerr said.

Kerr said Western Washington’s ports could provide the channels for sending those frontier foods out to the rest of the world. “I think it’d be great,” he said.

But David Lee, president and founder of Field Roast, said that over the past 20 years he’s learned a lesson that newer entrants would do well to heed.

“Our fundamental innovation is, we set out not to imitate animal meat,” Lee said. Instead, the company focused on a process that takes natural ingredients and transforms them into meat substitutes that can stand on their own – for example, smoked apple sage, Field Roast’s top-selling sausage.

“People want to know where their food comes from,” Lee said. “If you bite into a Boca Burger, where’s your field of reference?”

Clean meat has yet to face a true market test. So is the food frontier poised for a bloodless revolution?

“There are two answers to that question,” Lee told GeekWire. “On the one hand, there’s opportunity. But on the other hand, there’s a lot of established companies, there’s only so much room on the shelf, and when you’re the first, second or third out in the market, that’s a good position to be in.

“We’re lucky to have been in that position. … But I think it’s more difficult now for companies coming up. There’s a lot of pressure on the shelf right now.”


Build your own web search service with Bing Custom Search


Today at Build 2017, we announced the release of our latest addition to the Microsoft Cognitive Services portfolio – Bing Custom Search. Coming at a time when there is a demand for tailored search experiences, Bing Custom Search is an exciting new development.

Bing Custom Search is a commercial-grade solution that allows you to create a highly customized web search experience that delivers dramatically better and more relevant results from a targeted web space. It is now available as a free trial on the Microsoft Cognitive Services website, with additional availability planned for later this year.

In many ways, web search engines are now the gateways to information. By making it possible for you to create a custom web search service, Bing Custom Search opens up new possibilities for you to find knowledge about the things you deeply care about in many different ways. Our goal is to democratize access to information tailored to your area of interest and focused on a particular subset of the web.

While the Bing Web Search API allows you to search over the entire web, Bing Custom Search allows you to select the slices of the web that you want to search over and to control the ranking when searching over your targeted web space. You can programmatically retrieve your custom search results with the Bing Web Search API, using an additional query parameter.
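A minimal sketch of that programmatic access from Python — the endpoint, header and customconfig parameter follow the v7 Cognitive Services conventions, but check the portal for the exact values for your subscription:

    import requests

    SUBSCRIPTION_KEY = "YOUR-TRIAL-KEY"         # from the Cognitive Services portal
    CUSTOM_CONFIG_ID = "YOUR-CUSTOM-CONFIG-ID"  # identifies your curated web space

    resp = requests.get(
        "https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search",
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        params={"q": "bike touring routes", "customconfig": CUSTOM_CONFIG_ID},
    )
    resp.raise_for_status()
    for page in resp.json().get("webPages", {}).get("value", []):
        print(page["name"], page["url"])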

Build it quickly


With a straightforward UI, Bing Custom Search enables you to create your own web search engine without writing a line of code. Setting up a web search becomes easy, fast and enjoyable.

The core technology works in three steps: it identifies on-topic sites, applies the Bing ranker and delivers relevant search results, while allowing you to adjust the parameters at any time. You can specify the slices of the web to draw from and explore site suggestions to intelligently expand the scope of your search domain. You can also pin the websites you care about most to the top, which delivers dramatically better and more relevant search results for your area of interest.

Bing Custom Search Diagram

Ultimately, Bing Custom Search allows you to leverage the power of Bing’s globally operating search backend (i.e., index, ranking and document processing) to build a search that fits your needs.

For example, if you are an enthusiastic bike touring blogger, you might want an awesome bike touring search integrated into your blog. Bing Custom Search allows you to build such a targeted search in only a few steps.

Bing Custom Search - Bike Tours example

It is very easy to plug the custom search solution into your blog and share it with like-minded people.

Ad free and commercial grade


Displaying the results retrieved via Custom Search is totally ad free – no matter how much or how little of the service you use. It empowers businesses of any size, hobbyists or entrepreneurs to design and deploy web search applications for any possible scenario. For example, enthusiasts can plug it into their private websites to create a web search for fellow enthusiasts, and businesses can leverage it to set up a high-coverage web search quickly and affordably.

Bing Custom Search Preview webpage

As a commercial-grade solution, Bing Custom Search empowers you to design and deploy applicable search experiences for unlimited scenarios. Also, you have API access to your search results – giving you the capability to present the results as your customers want to receive them.

Get started


To get started with Bing Custom Search, go to https://customsearch.ai, or visit Bing Custom Search on Azure to sign up for a trial key.

We are excited to introduce Bing Custom Search to the developer community and are eager to get feedback about how you are using custom search and what you would like to see in the service. The team is steadily working to make our APIs even better with each release, so we want to hear from you.

You can contact the team at bingcustomsearch@microsoft.com.

- The Bing Team
