Rod Boothby: “with Joyent, service providers can deploy Clouds in a matter of weeks”

The last visit of our November press tour in Silicon Valley took place in downtown San Francisco, at Joyent, an innovative company dedicated to Cloud deployments. Joyent is now planning to deploy in Europe, starting with France and the UK. A few important announcements were made at this meeting. We were greeted by Bryan Brown and Rod Boothby, respectively SVP Business Development and VP Global Business Development.


Joyent’s mission statement is simple: “the best in class software for cloud operators”. Joyent’s main customers are public cloud operators. The company was founded in 2004 and the cloud offering was launched in 2006. In 2009, Intel invested in Joyent, and on November 19, 2010, Dell signed an OEM agreement with Joyent.

“Joyent isn’t the oldest, but one of the oldest Cloud operators” Brown added.

Joyent thinks it “is the only software company to build a complete Cloud stack”. Other companies have software stacks and others operate Clouds, whereas “we do both” Boothby said, “and we think that our only competitor is Microsoft”.

offering providers the “most profitable Cloud”

Joyent’s goal is simple: they want to “offer service providers the most profitable Cloud”. VMware’s approach is to virtualise servers, but Joyent’s solution is a complete data centre virtualisation offer. Here are some of Joyent’s differentiators.

  • operating system: a team of former Sun developers joined Joyent. “That means that everything can be optimised in the Cloud,” Boothby said,
  • a broad range of models can be offered to clients, with more breadth of performance and better scalability,
  • the file system Joyent uses is based on ZFS, which allows them to cache (a multiple-tier cache approach mixing RAM and SSDs) and, as a result, run Windows a lot faster than anyone else.
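The multi-tier caching approach mentioned above can be sketched as a simple two-level lookup: serve from RAM if possible, fall back to SSD, and only hit disk on a cold miss. The class below is a generic illustration of the idea, not Joyent’s actual implementation.

```javascript
// Illustrative two-tier cache: reads are served from the fastest tier
// that holds the block; cold misses go to disk and warm the SSD tier,
// while SSD hits are promoted to RAM. Sketch only, not Joyent's code.
class TieredCache {
  constructor() {
    this.ram = new Map(); // tier 1: fastest, smallest (DRAM)
    this.ssd = new Map(); // tier 2: slower, larger (flash)
  }

  read(block, loadFromDisk) {
    if (this.ram.has(block)) return this.ram.get(block);
    if (this.ssd.has(block)) {
      const data = this.ssd.get(block);
      this.ram.set(block, data); // promote hot block to RAM
      return data;
    }
    const data = loadFromDisk(block); // cold miss: fall through to disk
    this.ssd.set(block, data);
    return data;
  }
}

const cache = new TieredCache();
let diskReads = 0;
const load = (b) => { diskReads += 1; return `data-${b}`; };

const coldRead = cache.read('b1', load); // hits disk
const warmRead = cache.read('b1', load); // served from cache, no disk I/O
// diskReads === 1
```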

The Joyent partner list includes players like load-balancing company Zeus, New Relic, Cohesion, Intel and Arista. Joyent uses Arista to manage its switching, which makes it possible to better control the cloud.

in real life: two striking examples

  • Here is a proven example: thanks to Joyent, Gilt Groupe spends less than 1% of its revenue on infrastructure, which is 70% better than the average spend on that kind of thing,
  • LinkedIn uses Joyent to deploy all its ancillary projects (for instance, is running on Joyent). What this means is that companies like LinkedIn can launch and only scale up if they are successful. Note: LinkedIn started in 2003, before Joyent launched, and therefore its main service runs off a legacy infrastructure. Bumper Sticker from LinkedIn works off Joyent servers too,
  • a list of clients and business cases is available on Joyent’s web site.

the Joyent PAAS offering: node.js

Joyent’s Platform as a Service offering is characterised by HTML5, CSS and JavaScript. What is revolutionary is the non-blocking-I/O server-side JavaScript, which makes it possible to serve millions of users and handle 784,000 requests per second (vs. approximately 40,000 requests per second for Google), “which is insane”, Boothby exclaimed.

Joyent claims that node.js is fast and light enough to support the “Internet of Things”.

Becoming a public cloud service provider

If a service provider wanted to set up a public cloud for its clients, it could do so in a matter of weeks, Boothby explained. Servers can be Dell, but Joyent is Intel-based, so other vendors can be chosen as well.

There are no limits to the number of virtual machines the platform can handle; single sign-on is included and can be integrated with whatever legacy customer and billing systems you have, Boothby explained.

Why bother? Because there is more revenue per machine. On a Joyent cloud, one can generate 4 to 5 times more revenue per machine, Rod Boothby explained, based on Joyent’s experience (they support over 30,000 customers, thousands of applications and billions of page views). Joyent is confident that the only Cloud that will survive in such a competitive market is the one that is the most profitable.

This kind of turnkey approach means services too! This is why Joyent partnered with Dell Services (formerly Perot Systems). The Dell partnership will start immediately in the US, but there are plans to expand in Europe and Asia, and to be “very strong in Asia”. “We have a long-standing relationship with Dell,” explained Nema Badley, Director of Marketing at Joyent, adding that Joyent was holding another press meeting at the same time in San Francisco.

PAAS and Cloud computing

In Joyent’s mind, there is a difference between PAAS (platform as a service) and Cloud computing: PAAS is part of Cloud computing, but Cloud goes beyond platforms. In the following video, I asked our Joyent hosts to expatiate on this distinction.

Storsimple’s Ursheet Parikh battles against “cloud washing” and redefines cloud computing with a Hybrid Cloud strategy

note: this piece was originally written for the Orange Business Live blog

On November 15, after Zscaler, we visited Storsimple, a provider of Hybrid Cloud technology. Ursheet Parikh is one of the two co-founders and the CEO of Storsimple, a promising start-up operating in the storage domain, based in Santa Clara, Calif.

Storsimple started in May 2009, at the bottom of the recession, after an exceptional reception from VCs. Prior to this, Ursheet worked at Cisco. Storsimple is pioneering inbound marketing, as opposed to top-down marketing, because they want above all to be acknowledged by their peers. All engineers in the company have 20+ years of experience in the storage industry, and having 15-20 such people is, according to Storsimple’s CEO, a recipe for success.

Their approach has been very conservative, starting with a beta with a few customers; they are now planning to go live by the beginning of 2011. They have the financial backing to do this, and they make sure that all customers go through the beta test in a very thorough fashion. There are no small vendors left in storage, and therefore Storsimple is walking on eggshells: they believe they should build a business and not just a product, and they don’t want the start-up to end when the initial product is delivered.

a crusade against “cloudwashing”

Ursheet’s favourite word of the year is “cloudwashing”. He says that almost everyone is trying to sell stuff labelled “cloud computing”; “cloudwashing” happens when vendors sell things labelled as cloud computing when their solutions have nothing to do with it. Yet cloud computing is really important if it’s not trifled with. For most CIOs, the budget is a subset of revenue and IT can only do what its budget can achieve; this is why cloud computing is so appealing to IT managers. Enterprises need to embrace cloud computing just to stay in sync with the competition. So much so that Ursheet thinks the leaders in that space in a few years won’t be the equipment manufacturers but service providers like Orange Business Services. “A new set of players delivering IT as a service are cropping up and they will come to the fore,” Ursheet said.

different levels of cloud computing strategies … plus a new one

CXOs have different options in terms of cloud computing strategy, depending on their requirements:

  • SAAS, for which apps move to the cloud
  • PAAS (platform as a service), for which apps have to be rewritten
  • Public cloud compute, with offerings like Amazon EC2
  • Public storage, with Amazon S3 for instance
  • Private cloud technology, with vendors like EMC Atmos

All of these strategies have issues (see slide above), and Storsimple is adding an option to the list with what they call their Hybrid Cloud strategy, which in Ursheet’s mind addresses the full spectrum of requirements, from core services and security to the end-user. Yet Storsimple is not working against storage or cloud service providers, Ursheet Parikh added; on the contrary, it aims at providing a front-end interface to these vendors’ hardware and cloud solutions.

the problem: data explosion in the data life cycle

Collaboration has become a must if one wants to avoid the explosion in the amount of content and the number of copies of files that exist. De-duplication helps, but it doesn’t necessarily solve the issue, Ursheet said: even with de-duplication, companies still back up vast amounts of data and move them around across servers. This is what he calls “content communism”: no one wants to know what is there and no one wants to clear the data. What Storsimple does is analyse the working set, put that in local storage and use central storage for the rest. The challenges with cloud computing usage are integration, performance and security.

the solution: hybrid storage by Storsimple

Hybrid storage in the Microsoft enterprise environment, as seen by Storsimple, can be summed up in a few points:

  1. instant provisioning is made available,
  2. the storage disk is made available straight from the user’s desktop, as a separate networked iSCSI hard drive, but instead of sitting in the client’s private cloud, it is stored centrally,
  3. Storage performance is managed centrally by Storsimple through tiered storage (SSD, SAS and Cloud), weighted storage layout and real-time de-duplication.

One of the most striking functions offered by Storsimple goes by the name of “Cloud clones”. Storsimple Cloud clones are snapshots of current data changes stored in the cloud, a kind of mirroring, though not in real time. This makes backups and disaster recovery (DR) far easier and less complex, according to Storsimple’s CEO. Snapshots can be taken every 4 hours (or at whatever frequency clients want) and data is saved centrally in the cloud periodically. The operation is transparent for users and does not weigh on the performance of each individual desktop. Clones can be launched overnight if users or enterprises prefer to do so. In terms of Cloud acceleration, a home-developed algorithm optimises local storage vs. central storage and includes compression, so as not to put the burden on the user.

Pricing is based only on hardware capacity, measured on ingested content: you pay for the uncompressed footprint, not the compressed one. All the functionality is included. The end result is an average 80-90% cost reduction for clients.


Ursheet concluded by saying that this solution is very well suited to SharePoint too, particularly when it comes to solving the huge storage issues related to Microsoft’s online file sharing service. Online archival is also possible, thereby putting an end to the “content communism” issue highlighted above, thanks to compression and remote data retrieval.

Zscaler: “we want to be the of Internet security!”

this piece was originally written for the Orange Business Live blog

On November 15, I was invited with a group of journalists on a press tour of Silicon Valley (a sequel to our June 2010 press tour in the Valley). Our first presentation took place in Sunnyvale, Calif. at the main office of Zscaler, a ground-breaking cloud security provider, which is also a partner of Orange Business Services.

The landscape has evolved

The security landscape has changed dramatically over the past decade. Whereas most security threats (apart from social engineering) used to come from outside devices like floppy disks or the more recent USB keys, the vast majority of threats now come directly from the Internet. This has forced enterprises to equip themselves with a flurry of protection devices and software which have, over the years, generated staggering complexity; and now this complexity is increasingly getting out of hand. Traffic and policy management have become so important that the very dissemination of such rules and policies is a major pain point for CIOs, not to mention the fact that simple techniques such as URL filtering are not always proving very effective. Besides, traditional security measures generate humongous log files. Something had to be done, especially in this age of cloud computing, in which all clients are now seeking to rent their IT instead of buying it. Zscaler’s approach is therefore not to compete in the same market as traditional players, but to redefine the game by providing security in the cloud.

The Zscaler blog

The company is security savvy and dedicated to the Web community. To that end, Zscaler have developed an R&D blog packed with information about Web security; you are particularly advised to download their own BlackSheep Firefox plug-in, a security tool which will protect you from the Firesheep wifi-sniffing plug-in and prevent your Facebook details from being stolen by malicious people.

The Future?

What will the future hold for Zscaler? Will the company sell itself to a bigger player? Zscaler is getting so many calls from VCs throughout the week that selling would be an option, but its CEO says this is not one of his objectives. Zscaler is now performing so well that they think they are in a position “to build the of Internet Security”.

So far, the security market is a $1.2bn market dominated by a few players, with small players in the background. But the market is growing 30% year on year, and Zscaler’s CEO thinks it is still new and that “no one had done it properly so far”. This is why Zscaler believes it can be a major player in that market by disrupting it and changing the ball game.

Below is a transcript of the presentation as it was delivered on November 15 in Sunnyvale, Calif. (the presentation was given by Shrey Bhatia, Zscaler’s head of worldwide field management, and its CEO, Jay Chaudhry)

overview of Zscaler and its products

  • largest standalone cloud security company protecting 800 companies in 140 countries, millions of users
  • manage a cloud deployed across 40+ data centres globally
  • R&D across 3 continents, and 30 patents owned on cloud security technologies
  • offices in 15 countries across the US, Europe and APAC
  • positioned as the “most visionary” company by Gartner
  • growing revenue by 50%
  • clients include LVMH, Allianz, VW, Coca Cola, Wipro etc.
  • “anyone who uses the Internet is a potential client of ours”
  • in France, there are already many clients (see slide), and Orange Business Services is a partner of Zscaler’s (some of the French clients quoted on that slide were closed with Orange Business Services)

market overview: examples of how security is evolving on the Internet

  • Web (http protocol) has become main attack vector
  • over 80% of threats now come from the Web, up from 5% in 2000
  • It’s no longer USB disks or floppy disks
  • 85% of all traffic coming in and out of companies (of all types, small or large) is Web-based; this is why threats come from there too

Challenges facing the world in terms of Internet security

  • all content is active, live with Flash and Java, and this is what is making security threats more challenging
  • filtering: most companies want to control where employees go. But old list-based URL filtering no longer works: Facebook and Wikipedia have evolving URLs that change all the time. Besides, blocking Facebook is an issue if the same company is running multi-million-dollar advertising campaigns on Facebook!
  • Web 1.0 sites were read-only, whereas Web 2.0 sites are now a cause of information leaks: webmail, blogs, IM …
  • bandwidth is a real issue. Video is 20 times more exacting than text and companies are very concerned about the amount of bandwidth which is being used by video
  • Road warriors are a new challenge too: people use so many online applications that the Web has become critical, so it is of paramount importance to protect road warriors
  • the last and one of the biggest challenges is cost and complexity: CEOs require CIOs to do 20% more with a flat or even reduced budget

What Zscaler does and how they do it

  • Zscaler sits between the user and the Internet, anywhere in the world, wherever they are and whatever device they use. The user goes to the Zscaler cloud; Zscaler is the trusted third party and terminates the transaction to the Internet.
  • This is done with no hardware, no software, no plug-in, nothing!
  • This is why very international companies choose Zscaler.
  • How is it done?
    • in the browser, one has proxy settings; changing the proxy setting is all you have to do, and it can be done remotely
    • can be done at device or office level, from the firewall or router
    • Zscaler’s cloud is the most global cloud in the industry
    • The “policies” are kept in the cloud and follow companies and users as they move, by moving the policies to the closest data centre. This is what is called “shadow policies”
    • Latency is important, and this is why data centres have to be as close to users as possible
    • In the past 6-7 years, companies have deployed MPLS networks: the biggest benefit is that bandwidth usage is halved and latency is also improved. But network topologies change slowly, because enterprises have spent a lot of time putting their network topology together and are naturally reluctant to throw everything away now. Hence it is best to let them grow confident with the service before they change and re-engineer their network infrastructure.
  • cost-effectiveness
    • for all French customers, Zscaler manages tens of thousands of users with just two boxes, which is a lot easier and more cost-effective than managing the complexity of myriad CPEs (Customer Premises Equipment)
  • Will it slow things down?
    • Traditional security devices are firewall devices which weren’t designed to scale
    • Zscaler had to build new boxes which are very scalable
    • Standard costs to open 1 data centre is $1m, whereas Zscaler is able to open one for a fraction of that, with 2 boxes and can serve half a million users for that price
    • Nanolog is a special technology, developed by Zscaler, which compresses logs and speeds up transactions (traditional logs for an average large company generate 50-100GB of data every day, and none of that information can be searched or used)
  • If everything is centralised, how do you minimise threats?
    • the goal of a cracker is to get to the user’s machine and monetise information or turn it into a bot
    • Zscaler is just a conduit, hence it’s just a bridge, and there is not much value in accessing Zscaler’s boxes
    • Zscaler spends an awful lot of time and R&D to protect their servers and make the service safe
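The “change the proxy setting, remotely” step described above is commonly distributed as a PAC (proxy auto-config) file, which is itself JavaScript evaluated by the browser. The example below is hypothetical: the gateway hostname and internal domain are placeholders, not Zscaler’s actual endpoints.

```javascript
// Hypothetical PAC (proxy auto-config) file. The gateway hostname and
// internal domain below are made-up placeholders, not Zscaler endpoints.
// isPlainHostName and shExpMatch are built-ins provided by the PAC runtime.
function FindProxyForURL(url, host) {
  // Keep intranet traffic direct; send everything else via the cloud proxy.
  if (isPlainHostName(host) || shExpMatch(host, '*.internal.example.com')) {
    return 'DIRECT';
  }
  // Fall back to a direct connection if the proxy is unreachable.
  return 'PROXY gateway.example-cloud.net:80; DIRECT';
}
```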

Zscaler services

  • 4 types of Services come on top of that infrastructure:
    • Web security: antivirus, advanced threat protection, browser control, e-mail security
    • Web control: url filtering, web 2.0, limiting bandwidth (i.e. ensuring that YouTube for instance will not take up more than 30% of the total bandwidth)
    • Web DLP (data leaks/loss prevention)
    • Web analytics


  • save money and time; best security and policy management; real-time reporting; easy-to-deploy data loss protection mechanism; near-zero latency (high-performance proxy and breadth of cloud); integrated email & web
  • What Zscaler isn’t: Zscaler isn’t playing in the WAN optimisation space
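Bandwidth limiting of the kind listed above (capping YouTube at 30% of the link, say) is commonly implemented with a token bucket. The sketch below shows that generic technique; the presentation does not say how Zscaler actually enforces its limits.

```javascript
// Generic token-bucket sketch for per-category bandwidth capping, e.g.
// keeping streaming video under 30% of a 100-unit/s link. Illustrative
// only; this is not Zscaler's actual mechanism.
class TokenBucket {
  constructor(ratePerSec, burst) {
    this.rate = ratePerSec; // tokens refilled per second
    this.burst = burst;     // maximum bucket size
    this.tokens = burst;
    this.last = 0;          // timestamp of the last check, in seconds
  }

  // Returns true if `bytes` fits within the cap right now.
  allow(bytes, nowSec) {
    this.tokens = Math.min(this.burst, this.tokens + (nowSec - this.last) * this.rate);
    this.last = nowSec;
    if (bytes <= this.tokens) {
      this.tokens -= bytes;
      return true;
    }
    return false;
  }
}

// Video class capped at 30 units/s on a 100-unit/s link:
const videoCap = new TokenBucket(30, 30);
const first = videoCap.allow(30, 0);  // fits within the burst
const second = videoCap.allow(10, 0); // over the cap, throttled
```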

Arista brings cloud networking into the data centre, increasing performance 5 times with 10 times less power

On 3 June 2010 we had the opportunity to meet some very important figures of Silicon Valley in Menlo Park, California. We met with the people from Arista, namely Jayshree Ullal, a former Cisco top executive who is leading the new company.

The best

“If we had to describe ourselves,” Ullal says, “we could say that Arista means ‘the best’ in Greek!” This is raising the bar rather high, but Ullal and her team are seasoned and well-respected specialists in the area, and one quickly understands that this is not yet another start-up. The foundation of the company is based on the principle that cloud networking is different from enterprise networking. The company was founded in 2004 and took 4 years to develop the software, in essence an extensive operating system called EOS which is, according to its authors, “the first purpose-built software for data centres”.

Swift innovators

The closest competitor of EOS is Cisco’s NX-OS, which is in fact not really similar. Products started shipping in the second half of 2008; over the following 6 quarters, the company amassed a vast number of customers. Arista representatives say that “[they] are adding a customer a day, except weekends!” At the moment, the customer base is made up of 300 customers who “came in even before a marketing plan was in place”.

Not a “rocky start-up”

Arista is above all an experienced start-up rather than a “rocky start-up”, to put it in the words of Jayshree Ullal. Here, we mean serious business. It already has over 100 employees, and most of its customers come to them rather than the other way round. Arista bases its strategy on the fact that it can deliver innovation more swiftly than behemoths like Cisco, HP and the like. This is what made it possible for the company to receive Interop awards in 2010. “Arista is only focusing on the data centre, and is not doing enterprise networking,” adds Ullal. This is why, she continues, they are “ahead of anyone else”.

In a nutshell, Arista proposes better performance (up to 5 times better) for 1/10th of the power and half of the footprint in the data centre. In essence, the company proposes to improve data centres not just from a performance point of view but also from a Green-IT point of view, and this is their main selling point.

Interview of Jayshree Ullal

In the following interview, Jayshree Ullal describes what the company does, its main selling points and its particular track record in the investment finance industry, for which she gives examples, facts and figures.

Blade Network Technologies: “we do business with people for whom, when the network goes down, it will cost millions of dollars!”

the network as a business enabler

On June 3rd, 2010, at the end of our press trip in Silicon Valley, we had the opportunity to meet Vikram Mehta, President and CEO of Blade Network Technologies (BNT), a four-year-old company dedicated to “providing the interconnect fabric” behind cloud computing, to put it in the words of our host, who welcomed us at BNT’s headquarters in Santa Clara, Calif. What is behind this concept of “interconnect fabric” is the provision of intelligent networking and storage application connectivity for virtualised data centres.

what is keeping CIOs awake at night?

What I particularly liked about Vikram’s presentation was his introduction, in which he described very clearly the 7 pain points that keep CIOs awake at night.

  • first and foremost, scalability – the almost obligatory buzzword in the infrastructure industry and in the Bay Area in particular – is of the essence. As businesses grow rapidly and business owners rely extensively on IT to support their needs, the requirement for that IT infrastructure to grow with the business is becoming an imperative,
  • as data centres have to grow exponentially, density is one of the most critical issues that IT managers have to face. It’s a matter of packing as much computing and storage power as possible into as little space as possible. Yet it’s not just an issue of piling up more storage bays and blades; it’s also a matter of providing the critical connectivity between these various elements (computing, storage and I/O). All of this leads to mind-boggling issues in the data centre,
  • thirdly, faster and larger deployment of such infrastructure is a towering issue. Imagine a large investment bank which used to deploy 5,000 new servers each year. That very same bank – because of the increasing importance of automated trading – was led to deploy 100,000 servers last year! This is what happened to Morgan Stanley; BNT helped the bank overcome that issue and even won an award in the process,
  • fourthly, maximising the utilisation of that infrastructure is critical too. Not all servers are used in the same way. Some sort of yield management (i.e. the method pioneered by airlines in the 1980s in order to maximise the number of passengers per aircraft) is necessary in order to optimise the usage of deployed resources,
  • the fifth problem that CIOs face in this mass computing age is security, a topic often tackled on our own blogs. As more business is pushed online, notably in banking and investment banking as seen in the above example, more security is needed, because hackers will always focus on a) where information is widely available online and b) where big money flows,
  • next on CIOs’ agenda is the need to mine such data efficiently across huge databases. In essence, if more data is stored online, real-time drill-down into humongous databases becomes a critical issue, as seen in detail during our visit to Clustrix in San Francisco,
  • last but not least comes the total cost of ownership (TCO) issue, which is obviously and directly linked to this exponential growth in server and capacity deployment in the data centre.
