03/13/13

of geeks and men … a case of “us and them”

I spotted that picture (or rather, my wife did, let’s be honest) posted by Physicisttv on their Facebook page last night, and I couldn’t help sharing it with you on this blog. The fact is, we could change the caption to fit almost any kind of job that we don’t – quite – understand.


Very often I have seen “business” people label their digital experts “geeks” while meaning “martians”. Even myself (roarrrrrring laughter!). Conversely, programmers see users as dummies (remember the intelligence chart in the Dilbert Principle in which Dilbert describes end users as more stupid than hammers and “silly putty”?!)


[from The Dilbert Principle]

After all, I could well place an accountant in that chair and I’d see him as a martian because I never understood what these guys were up to. A case of “us and them”… a bit like what’s happening with helpdesks …

Dilbert.com

09/18/12

5 major trends for the future of IT and the Web – #blogbus

The Orange Blogger bus tour – which I organise on behalf of Orange, where I am Director of Internet and social media – stopped by San Francisco today, and the whole day was hosted by Orange Silicon Valley.

Georges Nahon delivered a very inspiring keynote today before our panel of bloggers, in which he shared his vision of what is happening in IT in general, and in the Valley in particular. I will begin my account of Georges’s visionary presentation with his conclusions. As I always do, I took detailed notes of the pitch, and they are available at the end of this piece. If there is one thing to remember from that pitch, it is that the Web is everywhere and in everything that will happen in the future. Something which established players don’t like, according to the Head of Orange Silicon Valley. However, Nahon insisted that it won’t be the same Internet we used to know.

Facebook will be “Yahooed!”

“Social” has been going through a rough patch over the summer, with the now infamous Facebook IPO, dubbed the “IPOcalypse”; IPO means “It’s Probably Overpriced”, Nahon said facetiously. Yet Europeans are wrong to interpret these issues as the end of social media, Georges Nahon said in essence. Social is here to stay and, beyond that, it will change everything which takes place on the Web, even though Facebook itself will probably be “Yahooed!”, Georges added.

But the worrying thing I took from his pitch is that, according to his analysis, next to the World Wide Web we all know, an increasing number of companies, including Amazon, are creating a “non-searchable adjacent Web”, which sounds very much like the end of the Web that Chris Anderson announced in Wired a few years ago. I think Georges is indeed right: there is a growing concern that Net neutrality is being sacrificed for the sake of user experience. Time will tell, but there are indeed worrying signs.


Georges Nahon, head of Orange Silicon Valley, on the first day of the blogger bus tour

Here is how I summed up Georges’s 5 trends for the future of IT:

  1. Tech is all about mobile: “Twitter is a mobile-first company” and thriving, he said, “Facebook isn’t and is suffering”. Mobile accounts for 10% of Internet traffic, yet 25% of US users access the Web from mobile only; in Egypt this number soars to 70%, and in India it is close to 60%! And 68% of users keep their mobile next to their bed while sleeping at night.
  2. The default is now social: and social meets mobile (over 50% of smartphones connect to Facebook). The social graph (Facebook), interest graph (Twitter) and influence graph (Klout) are the new frontiers of the Web and “they are here to stay … for a long time”, Nahon said. For many, Facebook is the new web (“find us on Facebook, follow us on Twitter”). What is the future of search? It is social, and both Google and Microsoft are working on it… “and Facebook search is coming fast”, Nahon added.
  3. Another Web: at the same time, traditional web development is slowing down, and Apple, Amazon, Facebook and Mobile will continue to develop their “non-searchable adjacent webs”, as Nahon called them.
  4. The Cloud as a new frontier: “The new guys are Amazon, Zynga, Rackspace, and even people like Google were taken by surprise”, Nahon said. But there are even newer guys you may never have heard of, such as Bluejeans, Alfresco, Joyent and many, many more. Explosive data growth is also forcing companies to develop solutions for data reduction. And “the next big thing isn’t software, it’s data”, Nahon concluded on that subject.
  5. All video will be on the Net: most players in that field come from the Internet world, not the media world. “We think that the future of TV is to be streamed”, Nahon said. There is more innovation than ever before in that area, he said, though he added that the concept of app-centric TV on smart TVs wasn’t entirely convincing. Time Warner sees its future in apps, but another trend is Social TV (described by Nahon as “a descendant of interactive TV, which never worked”). 85% of tablet owners use their device while watching TV, he said. What are they doing? Social websites, Zynga, search and Craigslist (an old web survivor!), according to Nielsen.

the future of the World Wide Web

So, what is the future of the Web? Georges Nahon highlighted 11 trends in that area too:

  1. the web is becoming data centric
  2. apps will rule consumer and enterprise innovations and HTML5 will infiltrate apps and web services
  3. non-searchable adjacent webs will continue to develop and the web will be fragmented and site-less (mobile, apps)
  4. the web of sites is dead and Facebook like buttons are the new hyperlinks
  5. Real-time multi-user game cloud platforms will influence enterprise cloud technologies: the main issue will be “latency” (as already explained on this blog)
  6. 4G/LTE (which we were all using today via local mifi devices) will trigger innovation
  7. mobile payment will kick off from 2015
  8. all video will be on the web
  9. Enterprise IT will shift to the cloud.
  10. Facebook will rule the web for the next 2 years with Google in catch-up mode, and within 3 years they will be “Yahooed!”, Nahon said
  11. Amazon will continue to diversify and will create more online commerce/entertainment clouds and mobile devices (tablets/phones). “Amazon is belittled in Europe” Nahon added, “and it should be considered as a major player, for Bezos is the new Steve Jobs”.

Orange Silicon Valley started as an R&D organisation and evolved towards what it is today (a scouting organisation). 60 people, 40 of which are in a position to file patents, and they file 20 per annum. Often, it’s about reviewing the strategy. As the Prussian general put it, “no plan survives contact with the enemy”: e.g. 5 years ago, no one had seen the iPhone coming, not even analysts. And none of these people had seen Apple becoming a major player in the telecom industry => be prepared for the unexpected. There were times when telcos could go to the ITU and get things sorted, but this isn’t the case anymore.

Essentially, Orange wants to be prepared for the future. One of the key elements of Silicon Valley is capital investment: in the Bay Area alone, venture investments represent $3.2bn, i.e. 46% of total venture investments in the USA (San Jose Chronicle, on Q2 results). Texas only represents $179m (3%), despite the huge tech firms in that state. The core subjects are ICT and media, but not only.

The software industry received the highest level of funding in Q2 of this year: $2.37bn, i.e. 32% of the total (34 out of 39% according to another source).

Market capitalisation: Apple + Cisco + Oracle + Google + Intel total $1,261.82bn (IBM is only $236bn, FTE $37bn). What this hides is the myriad of small companies which help these giants become what they are.


03/27/12

Engine Yard’s Dillon: “a private cloud is an oxymoron”

As I was in San Francisco today, I managed to squeeze in a meeting with some of my former partners from the IT press tour organised by my friend Philippe Nicolas. Today we visited Engine Yard, a new player at the forefront of the implementation of platform as a service (PaaS, the platform layer of cloud computing) for start-ups and Fortune 500 companies. Engine Yard’s CEO, John Dillon (our photo), shared his thoughts with us on the future of cloud computing before a panel of international journalists.

“The most important change in IT since the invention of the PC!”

John Dillon started his visionary presentation with a very straightforward statement: “Cloud’s the most important change to the way we do computing since the invention of the PC”, which addresses the points made by many detractors of cloud technology over the past few years. “All revolutions in IT start with listening to the user, and what works is what starts small, in a trial-and-error sort of way, and then deciding what really works”, he went on. “The cloud is scary and disruptive”, he said, but “it has the possibility to change the landscape forever, and we are part of that change”.

Describing himself as lucky, Dillon said Engine Yard had built a marketplace for the cloud and that they have two kinds of customers:

  • on the one hand, 2.0 start-ups who are looking to be the next Facebook. Dillon went on to say that Engine Yard has 2,500 customers in 58 countries, although they barely leave San Francisco. Engine Yard can help these clients with the technology, but not with their business models.
  • Engine Yard’s other clients are Fortune 500 customers but it’s not about core IT, it’s about experimentation and innovation: and it’s working.

“We are very early with the cloud”, John Dillon added; “Corp IT are still trying to figure out what it means, but most of the innovation is happening at the fringe, with systems that couldn’t be built 5-10 years ago”, and “if it doesn’t work you can throw it away, because it doesn’t cost much money”, he commented. Most business executives who want to innovate turn to IT, and that’s because of the cloud. The impact will be profound and substantial. The pattern the cloud is following is similar to that of successful innovations I have witnessed in the past. “Big IT shops are nervous about the future”, he said, provocatively and with passion, “but the change will come and we’ll have to be smart and not tear up the current systems that work”. CIOs beware, “the tsunami is coming”, he even added. No surprise, then, that they are reluctant to jump on the bandwagon …

evolution or revolution

It is often asked whether cloud computing is an evolution or a revolution. Dillon’s answer is loud and clear: “If it’s evolution and it’s very fast, sometimes it means that things get broken. The consumerisation of the cloud is happening everywhere and some will see it as revolution, others as evolution. Some IT executives are very positive and embrace the change rather than fight it. A whole bunch of customers 1 mile from here are doing very crazy things [by that he meant Silicon Valley start-ups], but most of the business done with the cloud will eventually come from big businesses.”

IT departments are not liked

But there is one thing anyone who has worked more than one week for a large company knows, and that is that “everybody hates the IT department”. Engine Yard’s CEO thinks that this is also the reason behind this (r)evolution: “users are fed up with asking for new things and with IT departments answering either that they can’t have it or that they will have it next year”, he added.

To him, a good IT department should say: “we should do something about it and use the new tools”. “A powerful IT department is one that is at the service of its users and shifts more investment into innovation, from 20% to 30% or even more”, he concluded.

A statement no IT user would disagree with.


a private cloud is an oxymoron

But the other problem with cloud computing is the “old sheep in new clothes” syndrome, which we described lately in an interview with a Sugar CRM executive.

“If you are building a private cloud, you are just shifting the budget from one department to another, and it’s only a way for the IT department to survive”, he bluntly stated.

However, Dillon is no IT department hater, he is merely trying to wake them up to innovation. “We are planning to become a very strong partner of IT departments in large enterprises” he said, “but they won’t buy anything now, it’s too soon; within 5 years from now, they will!”

The cloud is a paradigm shift

“When there’s a paradigm shift, legacy players start by denying it, then they try to hijack the idea (stating things like ‘the cloud is just like it used to look’) and eventually, change comes.” “Some big IT vendors will make the transition, but the sales model familiar to big IT vendors, in which you sell very expensive, very complex proprietary products, with add-on prices so high that the salesperson can buy a new BMW, will go!”

This is the same thing Dillon experienced whilst working at Salesforce. “10 years ago”, he said, “CIOs wouldn’t want to talk to us; now they all do!” Having said that, cloud computing will not solve integration issues, and this is where IT departments will have an important role to play.

[photos, cc, 2012 by Yann Gourvennec http://bit.ly/picasayann]

02/29/12

Kaspersky: “we will soon witness cybercrime explosion in the mobile world” #MWC #MWC12

This post was originally written for the Orange Live Blog, reporting live from Mobile World Congress in Barcelona

On Tuesday February 27th, I had the opportunity to attend a press conference organised by Kaspersky. Eugene Kaspersky himself was present for this big announcement in mobile security. The firm is already widely known for its PC protection suite, and today it made a few important announcements related to the release of parental control suites for mobile devices as well as an advanced protection suite for Android devices. In a flamboyant presentation by one of the press’s preferred showmen of the software industry, the Russian expert and businessman highlighted the risks that mobile users will face in the near future. Yet there have also been criticisms in the industry of scaremongering on the part of security software vendors …


Cybercrime will soon be too big to be ignored

15 years ago, personal computer users didn’t have a clue about cybercrime. Viruses didn’t exist, or were in limited supply, and when they did exist they tended to be rather harmless. Similarly, as of today, many users are still wondering whether cybercrime is real or not when it comes to mobility. Nowadays, PC users aren’t questioning that “malicious software” (aka malware) exists, nor that it is a real threat. We all know it’s there and that having a proper antivirus installed and regularly updated is a must. Such was Eugene Kaspersky’s introduction, meant to announce that cybercrime is just about to soar in the mobile industry.

“IT will split into 2 environments: Android for the home environment, and business with the Windows environment”, Kaspersky announced, even though Windows Phone and Nokia may prove him wrong soon, as it happens; but we understand what he is hinting at: open environments like Android are easier to pry into and are the prime targets for cybercriminals. “No safe zone will subsist”, he added.

Going back into the history of computing, viruses like Chernobyl in 1998 were so fierce that they made infected machines unusable. The virus would reprogram the BIOS (the basic software below Windows which makes your machine work) and damage it beyond repair. This virus and others like Melissa or “I love you” changed people’s minds about cybercrime forever, Kaspersky rightfully remarked.

Cybercrime is moving into mobiles


The Russian expert’s theory is that mobility is going the same route as computers a few years ago because, in his mind, “there are fundamentally no differences between computers and mobile devices”.

“The number of computer threats has reached a plateau”, he went on, so there are few or no new players in the PC cybercrime space, or otherwise they would need to be extremely professional. The computer crime scene is therefore mature, and there are also other, non-malware-related scams which work well in that space (spam, phishing and pharming being the most frequent ones). “Online banking only started in 2001-2002 and this is the reason why crime soared too after those days”; now that mobile equipment is booming (in 2012 there will be more than 484 million smartphones worldwide), “we will witness a mobile malware explosion”, Kaspersky warned while showing us a chart (see picture above) with some very worrying numbers.

“75% of malware is targeted at Android”

“Cybercrime in the mobile industry started as early as 2005”, he added, and Android is now becoming the dominant mobile malware platform (a sign of the times: Android went through 1,199 modifications in December 2011 because of security threats). Eugene Kaspersky said that he had expected this to happen and that he is now “sure that the trend will follow that of computers in 2000 and beyond”. “This is bad news for smartphone manufacturers, who will need to add extra processing power to cater for security”, he said, adding facetiously, “this isn’t my fault!”. As the above chart shows, things started to get very bad in 2010, and mostly in 2011. And it’s not just mobile devices and tablets, he concluded, but all connected devices, notably TV screens.

how to protect your device

But it wouldn’t be right to scare all mobile and tablet users without giving them good and straightforward advice with regard to the protection of their devices. And apart from the security suites sold by Kaspersky (and its competitors), common sense is a good method for keeping your mobile data out of harm’s way. Here are Kaspersky’s recommendations:

  1. lock your mobile screen
  2. use security software (of course, you would expect that from a security software vendor)
  3. back up your mobile data
  4. use encryption whenever possible
  5. beware of what you install and don’t click on dodgy websites
  6. do not jailbreak your device as you would enhance the capability of malicious software to damage the core of your mobile device
  7. do not connect to untrusted Wi-Fi points
  8. do not skip updates of your OS when they are available
  9. do not assume that your mobile device is safer than your PC

2012 predictions

Kaspersky and his teams fortunately do not predict that a mobile IT apocalypse will take place in 2012 but they are pretty certain that Android will be the main target for massive attacks and that 2012 will see the rise of the first massive worms for Android as well as ‘malware’ in official market applications.

“scareware for charlatans”?

A critique of security software companies’ approach to malware protection on mobile devices (not just Kaspersky’s) was fuelled last November by Chris DiBona, leader of open-source software at Google. Kaspersky’s Denis Maslennikov dismissed DiBona’s claims by pointing out that Google launched its own antivirus solution soon after. A complete thread of the Kaspersky/Google story is available at this URL. It is a bit early to tell who is right and who is wrong, even though there may be a bit of truth on both sides. Time will tell.

About Kaspersky security

The Russian Federation is known for its high concentration of online crime perpetrators, but also for being home to some of the world’s most revered security experts. Kaspersky, named after its founder and CEO Eugene Kaspersky, a maths lover who used his skills to make surfing safer, is one of the leaders in that space. You can find details about their offerings at Kaspersky.com

11/23/11

Welcome to the API jungle – part II – what Brands should do

Welcome to the API jungle or why developers must learn how to find, select, integrate APIs and contribute to their improvement and evolution (part II)

by Martin Duval, CEO, Bluenove

The API landscape is extremely dynamic. The following 2 diagrams, taken from ProgrammableWeb, describe the most common APIs used to build mash-ups. They show the dominance of the major historic players (check the “all time” diagram) but also the more recent rise of new players such as Twilio (last 14 days), as well as the convergence of Cloud Computing and Telecom.

There is yet more evidence of the emergence of this new ecosystem; it is indeed interesting to notice the emergence of new players offering the technical support to manage API infrastructures, such as the new Application Enablement Services Business Unit from Alcatel-Lucent (the company that acquired ProgrammableWeb in 2010), Mashery, Apigee, Aepona or 3Scale.

But let’s come back to the relationship between Apps and APIs.

After all, aren’t applications mere channels? We can reasonably bet that Brands will also create APIs in the future to support their promotion, probably with a more obvious way to demonstrate Return On Investment through the number of innovative apps created by third-party creative developers than through the number-of-downloads KPI[7] of their own app. Brands could therefore propose APIs in order to extend the reach of their products and services.

Here are some suggestions for a few popular Brands[8].

  • Nike could create a “Just Size It” API that gives the perfect shoe size from the photo of your feet,
  • Evian could create a hydration API that calculates the quantity of water a person needs to drink daily and reminds her when rehydration is needed,
  • Netflix has proposed an API to tap into its customers’ creative capabilities, and even organized a contest [9] so as to crowdsource ideas leading to the improvement of the algorithms of its movie recommendation engine,
  • French off-licence chain Nicolas could create an API that allows its customers to find and leave recommendations about the wine they buy.
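To illustrate how small such a brand API could be, here is a sketch of the hypothetical Evian hydration service as plain Python. Everything here is an assumption for illustration: the function names, the JSON-style payload, and the rule-of-thumb figures (about 33 ml of water per kg of body weight, plus extra for exercise) are invented for this example, not anything the brand has published.

```python
def daily_water_ml(weight_kg: float, activity_minutes: int = 0) -> int:
    """Rule-of-thumb estimate (illustrative figures only): ~33 ml per kg
    of body weight, plus ~350 ml per 30 minutes of exercise."""
    return round(weight_kg * 33 + (activity_minutes / 30) * 350)

def hydration_api(params: dict) -> dict:
    """The JSON-style payload our imagined hydration endpoint could
    return to a third-party app or mash-up."""
    ml = daily_water_ml(params["weight_kg"], params.get("activity_minutes", 0))
    glasses = round(ml / 250)  # assuming a 250 ml glass
    return {"daily_ml": ml,
            "glasses": glasses,
            # spread reminders over 16 waking hours
            "remind_every_min": round(16 * 60 / glasses)}

payload = hydration_api({"weight_kg": 70, "activity_minutes": 30})
```

A developer consuming such an API could then build the reminder app, the smart-bottle integration or the fitness mash-up, which is precisely how the Brand’s reach gets multiplied beyond its own app.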

How could these companies support the use of their APIs, and therefore the promotion of their Brand? Through developers, who would make the APIs accessible to end users on different interfaces, and who would find ways to remunerate themselves by proposing new business models.

Of course, Brands can still develop some specific applications themselves, but the decision to propose an Open API offers an unparalleled way to exponentially boost the reach of their promotion.

A lot of marketing managers are victims of the ‘gadget syndrome’: they follow the trend, getting on board the latest fashionable feature to include in their marketing plans. One year it is the ‘Facebook Page’ or the ‘Twitter account’, and the year after, the ‘Mobile App’.

But as part of a more sustainable marketing and innovation strategy, the best solution may very well not be an application but rather an Open API.

Another trend to take into account as a booster of the number of APIs is Open Data: the opening up of public data by administrations. After initiatives in the US with Data.gov and in the UK with Data.gov.uk, Etalab[10] is about to launch the Data.gouv.fr portal of data sets in December 2011, and French cities such as Rennes[11], Paris[12] or Montpellier[13] have already exposed some data sets, some of them as APIs.

Open Data for businesses

The concept also appeals to businesses, as shown by the Bluenove white paper (in French) entitled “Open Data: what are the issues and the opportunities for the enterprise?”, with sponsors such as French railways SNCF, French Post Office group La Poste, SUEZ ENVIRONNEMENT and the French confectionery giant Poult group. The Civil Service and local governments, as well as businesses, will have to learn how to attract, engage and manage a community of developers, but also of entrepreneurs, researchers, academics, students and companies from other industries, to motivate them to use their APIs and boost their innovation.

as a conclusion: the fundamental role of developers

Martin Duval, CEO, Bluenove

On the one hand, major platforms continue relentlessly to open themselves to more and more end users thanks to more open developments. On the other hand, developers will try to invent new applications, but will also have to use an increasing number of available APIs and new skills to detect, select and integrate them, but also to contribute to improving them and even ask for new ones.

New types of requirements, services and skills are emerging which keep the collaboration and innovation momentum going between the members of these complex ecosystems, among which developers have a fundamental role to play.

__________________

[7] KPI : Key Performance Indicator

[8] Examples from this article on Mashable by Adam Kleinberg : http://mashable.com/2011/01/04/brand-open-api-developers/

[9] One of ’the 12 levers of Open Innovation’ : see http://www.slideshare.net/Bluenove

[10] EtaLab : http://www.etalab.gouv.fr/

[11] Rennes Open Data : http://www.data.rennes-metropole.fr/

[12] Paris : http://www.bluenove.com/publications/revue-de-presse/bluenove-fait-parler-les-donnees-de-la-ville-de-paris-et-le-web/

[13] Montpellier Open Data : http://opendata.montpelliernumerique.fr/Le-projet

11/22/11

Welcome to the API jungle – part I – a surfeit of APIs

Photo: Yann Gourvennec

Welcome to the API jungle or why developers must learn how to find, select, integrate APIs and contribute to their improvement and evolution (part I)

by Martin Duval, CEO, Bluenove

As I started writing this article at the beginning of October 2011, the http://www.programmableweb.com/[1] website indicated on its home page that it had identified 4,007 APIs and 6,175 mash-ups worldwide. By the time you read this piece on Visionary Marketing, I am certain these numbers will be completely outdated. Dear Developers, I agree that these numbers are still very far from your ‘Ocean of Apps’, but this new ecosystem is nevertheless starting to look like a ‘Jungle of APIs’.

First and foremost, let’s take the time to put this notion of Open APIs back into context. ‘Application Programming Interfaces’ enable the connection between different IT platforms and the integration of different applications and services through the creation of a ‘mash-up’. The Open APIs proposed by a mobile or web player aim at helping the creation of an ecosystem around a common platform, thereby forming a dynamic community of creative developers who are given the opportunity to innovate faster, and in many more directions, than if they did it on their own. This is therefore a genuine Open Innovation strategy, in which the various players will have to initiate and maintain a long-term bond of trust, based on elements such as stability, sustainability and ease of use of the platform and the APIs, but also on a win-win relationship with the community of partner developers.
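To make the mash-up idea concrete, here is a minimal sketch in Python. Both “APIs” below are in-memory stubs with invented data and field names (a real mash-up would fetch JSON from each provider over HTTP); what matters is the chaining pattern: one API’s output becomes another API’s input, producing a new service that neither provider offers on its own.

```python
def geocoding_api(city: str) -> dict:
    """Stub standing in for a geocoding API (invented data)."""
    places = {"Paris": {"lat": 48.86, "lon": 2.35},
              "San Francisco": {"lat": 37.77, "lon": -122.42}}
    return places[city]

def weather_api(lat: float, lon: float) -> dict:
    """Stub standing in for a weather API (invented data)."""
    return {"temp_c": 18 if lat > 40 else 15, "conditions": "clear"}

def city_weather_mashup(city: str) -> dict:
    """The mash-up itself: chain one API's output into the other's input."""
    coords = geocoding_api(city)
    weather = weather_api(coords["lat"], coords["lon"])
    return {"city": city, **coords, **weather}

result = city_weather_mashup("Paris")
```

The combined payload is the kind of value-added service ProgrammableWeb lists in its mash-up directory: neither the geocoder nor the weather provider exposes “weather by city name”, but five lines of glue code do.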

Beyond the major web platforms (Google, Facebook, eBay, Twitter, Amazon, etc.) that propose a big set of APIs to developers, and of course the main mobile OSes (iPhone, Android, Windows Phone, etc.) offering their SDKs[2] to support the development of mobile applications, similar open programmes exist in the Telecom industry as well. Telecom operators such as Orange (with Orange API[3]), Telefonica (with their BlueVia[4] programme) or Telenor (with Mobilt Bedriftsnett[5]) also give third parties access to some of their network assets, such as SMS, click-to-call, location, storage, billing, etc., in order to facilitate the emergence of new services through the innovation potential of developers, start-ups and brands.

A signal demonstrating the need for rationalisation and standardisation in this ‘jungle of APIs’ came up with the GSMA ‘One API’[6] initiative: a success still to be confirmed.

___________________

[1] ProgrammableWeb was acquired by Alcatel-Lucent in 2010

[2] SDK : Software Development Kit

[3] Orange API : http://api.orange.com/en

[4] BlueVia from Telefonica : https://bluevia.com/en/

[5] Telenor Mobilt Bedriftsnett : http://www.telenor.com/en/news-and-media/press-releases/2009/Telenor-opens-new-interface-to-third-parties

[6] GSMA ‘One API’ : http://www.gsmworld.com/oneapi/

01/12/11

a look into Isilon’s “plug and play” approach to storage

Our second presentation at Isilon’s headquarters was dedicated to a closer look into Isilon’s solution and a visit to its lab. Brett Hestsel, VP of engineering, gave us a more detailed presentation of Isilon’s distributed storage architecture. Brett has been with Isilon for 3 years, after 10 years at DEC and various other high-tech companies, and has been in the industry for 30 years. Brett said that this is the first time in his career that clients come to him and say: “I love your product, you made me save a lot of money!”

The real secret at Isilon is in the software, Brett explained, and Isilon’s operating software is named OneFS (click here for a detailed explanation of how OneFS works). With Isilon, the whole storage is managed by software and the data is distributed across different discs within a node (aka storage server). The end result is that each node only holds part of the files, so that if a disc fails, all the other discs (typically 120 of them per cluster) can rebuild the lost data from the information they hold. Because there are no spare discs and everything is managed via software, availability rates are much higher than with more traditional storage technologies, Brett said in his presentation.
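Brett did not go into the internals of OneFS, but the general idea behind “rebuilding from the other discs” is that redundancy (parity or erasure codes) is stored alongside the data. The sketch below is a deliberately minimal, single-parity illustration in Python, not Isilon’s actual algorithm: a file is striped into data chunks plus one XOR parity chunk, and any single lost chunk can be recovered by XOR-ing the survivors.

```python
from functools import reduce

def stripe_with_parity(data: bytes, n_data_chunks: int) -> list:
    """Split data into equal-sized chunks and append one XOR parity
    chunk, so that any single lost chunk can be rebuilt."""
    chunk_len = -(-len(data) // n_data_chunks)  # ceiling division
    padded = data.ljust(chunk_len * n_data_chunks, b"\x00")
    chunks = [padded[i * chunk_len:(i + 1) * chunk_len]
              for i in range(n_data_chunks)]
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    return chunks + [reduce(xor, chunks)]  # data chunks + parity chunk

def rebuild(chunks: list, lost_index: int) -> bytes:
    """Recompute the chunk at lost_index by XOR-ing all survivors."""
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    return reduce(xor, (c for i, c in enumerate(chunks) if i != lost_index))

stripes = stripe_with_parity(b"isilon onefs demo", 4)
rebuilt = rebuild(stripes, 2)   # pretend the disc holding chunk 2 failed
assert rebuilt == stripes[2]    # the lost chunk is recovered exactly
```

Production systems use stronger erasure codes (such as Reed-Solomon) so that several simultaneous disc failures can be survived, but the principle of reconstructing lost data from the surviving discs is the same.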

a visit to the Isilon Seattle Lab

We were also able to visit the Isilon lab (see the Web photo album below) with Xavier Guérin, regional manager for Southern Europe & Benelux.

In this lab, Isilon engineers carry out tests on OneFS, benchmark their solutions against those of competitors, and test Quality of Service (QoS) as well as system functionality and reliability. Customers can also come to the lab to see how their applications run on Isilon’s storage servers.

a plug and play approach

Xavier explained that Isilon’s differentiator is that the company started from a blank sheet and wasn’t burdened with existing technology and its limitations. This has made it possible for Isilon to reinvent data storage, and the way it is managed, with its distributed architecture. “Isilon applied the principles of grid computing to storage”, Xavier said, and this is how it works.

Each of the nodes (that is to say, each rack Xavier is pointing to in the above picture) incorporates a number of discs (typically from 12 to 36) and is built in a “plug and play” fashion. This means that a rack can be added or pulled out seamlessly. “Kill a node and the data will reconstruct itself across the other nodes and discs”, Xavier said. This is what makes this solution different.

a great variety of clients and industries using Isilon

The media industry, and especially the 3D animated movie industry (notably Walt Disney), relies heavily on this technology to improve performance and cost-effectiveness. Other clients include the Broad Institute (a life science research institute), Stratscale hosting services, Facebook, Dailymotion.com (a French contender to YouTube), Peugeot (using Isilon for crash tests), CNG (part of French nuclear research body ‘CEA’) and STMicroelectronics, as well as the Orange group itself (on servers operated by Orange multimedia business services, a department of Orange Business Services, on their premises).

the album of our visit to the Seattle Isilon lab

01/10/11

Emulex is driving convergence in the data centre

Our last but one visit in the Silicon Valley on November 18 was with Emulex, and we were greeted by Steve Daheb, CMO & SVP business development and Shaun Walsh, VP Corporate Marketing

backgrounder

The Emulex vision stems from the realisation that networks were about to converge for the first time: data networks used to be separate from storage networks, but with the advent of 10Gb Ethernet, things are converging. Emulex, a 30-year-old company founded in 1978, is based in Costa Mesa, Calif. and has 960 employees. Emulex works with HP, IBM and all the other major companies, and is also a great partner of Blade Network Technologies, which we visited previously.

Emulex has two product lines: 1) host servers and 2) back-end storage, the latter a legacy business built on Fibre Channel (a technology used to build SANs, a market in which Cisco and Brocade are leaders).

Recently, Emulex acquired ServerEngines, with which it had worked for two years. Emulex closed deals with HP and IBM through ServerEngines, and is winning quite a few deals in which its chips equip OEM 10Gb Ethernet equipment: 9 out of 10 tier-1 OEMs have picked Emulex for convergence (some exclusively, some not), among them HP, Dell, Cisco, Fujitsu Siemens and IBM. A big part of the company is already working on what will come next and will be delivered in 2012 and 2013.

the data centre of the future

A lot of enterprises are still at the crossroads, with data, email and storage networks all considered 3 distinct networks. With this kind of architecture, customers are compelled to upgrade their data centres, and cost is a big issue. A second step is to “move everything into virtual data centres, and this is where we are today”, Shaun added.

But in the last 24 months, we have seen the rise of converged fabric-based computing as announced by HP (FlexFabric), IBM (Virtual Fabric) and Cisco. Convergence is happening for server virtualisation, switching, storage and adapters (LOMs, i.e. LAN on Motherboard). These past 24 months, convergence has been the flavour of the month and is showing the way forward.

from servers to blades and lastly … to “cookie sheets”

10Gb Ethernet is a new inflection point and a signal for innovation, an innovation made flesh in the 4 main trends which Steve described here. These 4 trends are defining the future of data centres:

  1. data is growing at an increasing rate and this is driving huge demand in bandwidth,
  2. IT priorities are based around how one can deploy more quickly,
  3. cost drivers are prevalent: lower capex/opex and data centre consolidation,
  4. technology trends such as “cloud computing” which necessitate a convergent environment, mostly for SMEs.

Behind these 4 trends, Steve described 3 business drivers which are impacting the way that data centres are being architected.

  1. SMBs and remote office data centres
  2. enterprise hybrid data centres: a mixture of internal data centres for data privacy and public cloud applications for cost reasons
  3. hyper scale data centres

Data is growing at a staggering compound annual growth rate of approx. 50%. More users mean more transactions, and convergence is therefore required. 9m servers will be shipped in 2010, 1m+ of which will be of the new breed, Shaun said, adding that we are moving from servers to blades and now to what he called “cookie sheets”, because they are so thin.

from a networks point of view, what is driving change?


Steve described the 5 reasons for change in the data centre from the network perspective:

  1. the explosion of devices, which will eventually lead to the replacement of the laptop, and the explosion of content and transactions,
  2. new technology is being brought to the edge – such as the new offerings from Riverbed – and more intelligence is needed in the data centre,
  3. video and web-enabled mobile are boosting traffic so much so that switch port shipments are rocketing up with approx. 60% CAGR (see above photo),
  4. security is also a huge driver,
  5. network traffic is driven not only by consumers but also by enterprises, with mobile access to applications.

the data centre of the future requires a new networking model

Emulex sees its role as that of a leader in the move to convergence, partnering with BNT, VMware and all its OEMs.

the management mind meld book available for download

If you want to know more about the Emulex vision, we recommend you download their online e-book, “the management mind meld”.

you can also follow Emulex on Twitter at @emulex

Converged networking results in an overlap of network and storage administrators’ responsibilities. This guide explains networking and storage basics to help each administrator better understand the changes resulting from converged networking and how it will impact their role in the data center.

11/29/10

Rod Boothby: “with Joyent, service providers can deploy Clouds in a matter of weeks”

The last visit of our November press tour in Silicon Valley took place in downtown San Francisco, with Joyent, an innovative company dedicated to Cloud deployments. Joyent is now planning to deploy in Europe, starting with France and the UK, and a few important announcements were made at this meeting. We were greeted by Bryan Brown and Rod Boothby, respectively SVP Business Development and VP Global Business Development.

introduction

Joyent’s mission statement is simple: “the best in class software for cloud operators”. Joyent’s main customers are public cloud operators. The company was founded in 2004 and its cloud offering was launched in 2006. In 2009, Intel invested in Joyent, and on November 19, 2010, Dell signed an OEM agreement with Joyent.

“Joyent isn’t the oldest, but one of the oldest Cloud operators” Brown added.

Joyent thinks it “is the only software company to build a complete Cloud stack”. Other companies have software stacks and others operate Clouds, whereas “we do both” Boothby said, “and we think that our only competitor is Microsoft”.

offering providers the “most profitable Cloud”

Joyent’s goal is simple: they want to “offer service providers the most profitable Cloud”. VMware’s approach is to virtualise servers, whereas Joyent’s solution is a complete data centre virtualisation offer. Here are some of Joyent’s differentiators.

  • operating system: a team of former Sun developers joined Joyent, which means that “everything can be optimised in the Cloud”, Boothby said,
  • a broad range of models can be offered to clients, with more breadth of performance and better scalability,
  • the file system Joyent uses is based on ZFS, which allows them to cache (a multiple-tier cache approach mixing RAM and SSDs) and, as a result, to run Windows a lot faster than anyone else.
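
As an illustration of that multi-tier idea, here is a hypothetical sketch (not Joyent’s or ZFS’s actual code): a small, fast RAM tier sits in front of a larger SSD tier, hot blocks are promoted upwards on access, and cold ones are demoted downwards on eviction.

```javascript
// Two-tier cache sketch: RAM in front of SSD, with anything falling
// off the SSD tier left to be re-read from disk on the next miss.
class TieredCache {
  constructor(ramSize, ssdSize) {
    this.ram = new Map(); // fastest, smallest tier
    this.ssd = new Map(); // larger, slower tier
    this.ramSize = ramSize;
    this.ssdSize = ssdSize;
  }
  get(key) {
    if (this.ram.has(key)) return this.ram.get(key); // RAM hit
    if (this.ssd.has(key)) {
      const value = this.ssd.get(key);
      this.ssd.delete(key);
      this.put(key, value);                          // promote to RAM
      return value;
    }
    return undefined;                                // miss: go to disk
  }
  put(key, value) {
    this.ram.set(key, value);
    if (this.ram.size > this.ramSize) {
      // Demote the oldest RAM entry down to the SSD tier.
      const [oldKey, oldVal] = this.ram.entries().next().value;
      this.ram.delete(oldKey);
      this.ssd.set(oldKey, oldVal);
      if (this.ssd.size > this.ssdSize) {
        this.ssd.delete(this.ssd.keys().next().value); // fall off to disk
      }
    }
  }
}
```

The real win of such a hierarchy is that frequently accessed data is served at RAM speed while the SSD tier absorbs the working set that RAM cannot hold.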

The Joyent partner list includes players like load-balancing company Zeus, New Relic and Cohesion, Intel and Arista. Joyent uses Arista to manage its switching, which makes it possible to better control the cloud.

in real life: two striking examples

  • Here is a proven example with Gilt Groupe, which – thanks to Joyent – is spending less than 1% of its revenue on infrastructure, 70% better than the average spend on such things,
  • LinkedIn uses Joyent to deploy all its ancillary projects (mobile.linkedin.com, for instance, runs on Joyent). This means that companies like LinkedIn can launch and only scale up if they are successful. Note: LinkedIn started in 2003, before Joyent launched, and its main service therefore runs off a legacy infrastructure. Bumper Sticker from LinkedIn works off Joyent servers too,
  • a list of clients and business cases is available from Joyent’s web site.

the Joyent PAAS offering: node.js

Joyent’s Platform as a Service offering is characterised by HTML5, CSS and JavaScript. What is revolutionary is the non-blocking server-side JavaScript, which makes it possible to serve millions of users and allows 784,000 requests per second (vs. approx. 40,000 requests per second for Google), “which is insane”, Boothby exclaimed.

What Joyent is claiming is that node.js is fast and light enough to support the “Internet of Things”.

Becoming a public cloud service provider

If a service provider wanted to set up a public cloud for its clients, it could do so in a matter of weeks, Boothby explained. Servers can be Dell, but Joyent’s stack is Intel-based, so other vendors can be chosen.

There are no limits to the number of virtual machines it can handle, single sign-on is included, and the solution can be integrated with whatever legacy customer and billing system you have, Boothby explained.

Why bother? Because there is more revenue per machine. On a Joyent cloud, one can generate 4 to 5 times more revenue per machine, Rod Boothby explained, based on Joyent’s experience (they support over 30,000 customers, thousands of applications and billions of page views), and Joyent is confident that the only Cloud that will survive in such a competitive market is the one that is the most profitable.

This kind of turnkey approach includes services too! This is why Joyent partnered with Dell Services (formerly Perot Systems). The Dell partnership will start immediately in the US, but there are plans to expand in Europe and Asia (“very strong in Asia”). “We have a long-standing relationship with Dell”, explained Nema Badley, Director of Marketing at Joyent, adding that Joyent was running another press meeting at the same time in San Francisco.

PAAS and Cloud computing

In Joyent’s mind, there is a difference between PAAS (platform as a service) and Cloud computing: PAAS is part of Cloud computing, but the Cloud goes beyond platforms. In the following video, I asked our Joyent hosts to expatiate on this differentiation.

11/24/10

Blade Network Technologies: “IBM now has the network fabric to solve its clients’ pain-points in the data centre”

Back in June 2010, we visited Blade Network Technologies, a Nortel spin-off which was doing exceptionally well in the data centre business, providing networking infrastructure and services within the data centre – a concept Vikram Mehta had described as the “network fabric” of the data centre. We paid another visit to BNT on November 18 at their new premises in Santa Clara, but this time BNT is no longer a start-up but an IBM company. So what has changed since then, and what is in store for the market in that area?

“Not much has changed in BNT,” Mehta said, “apart from the red bar in the E of the Blade logo, which has become blue” (as shown in our logo montage at the top of this post). But the IBM deal was described in detail by the CEO of BNT as a real opportunity for IBM clients.

It’s actually not quite true that things haven’t changed since June. BNT is now 322 people strong, vs. 250 when we last met in June 2010, and is hiring more people in sales, engineering and customer support. BNT is also very close to the 10m shipped ports bar that was its goal at the beginning of the year, and the company is now #2 in data centre switching.

On Oct 29, IBM closed the acquisition of BNT, a deal which had started on September 27, 2010. “As was discussed at our latest meeting in June” Mehta went on, “what our customers are worried about is the scaling of their data centre infrastructure”. The architecture of the future will be made of many IT elements, federated together: “but how do you connect such elements?” Vikram Mehta asked: “with the network fabric” he said, “and now IBM has that network fabric”.

“The maximisation of the IT infrastructure has very much been discussed via the word ‘virtualisation’,” Vikram Mehta went on, and IBM was a pioneer in virtualisation years ago. Now, Blade Network Technologies brings network virtualisation to the mix, so that IBM is able to “deliver in all the areas of virtualisation: server, storage and network”, Mehta said. Besides, because networks are the weakest link in IT security, and because BNT has a track record in network security, IBM is also able to better protect its clients’ data centres. There are many other areas in which BNT can team up with IBM beyond architecture improvements, such as data warehousing (through IBM’s recent acquisition of Netezza) and TCO reduction.

IBM focus: “best of breed solutions”

How is BNT linked to IBM, and through which division? IBM exited the networking business in 1999 and in 2010, more than a decade later, re-entered the market through the acquisition of BNT. BNT will report to the Systems and Technology Group (STG) and will form the networking division within that group, Mehta explained. Tivoli does IT service management at the highest level, Mehta added; BNT has long been integrated in that picture, and this integration will only get better under the new management.

The approach, compared to other visions on the market, is far less proprietary: a vision in which customers choose what is best for them (NAS, Fibre Channel, Tivoli, OpenView, Opsware, etc.) and in which BNT will continue to provide linkages to all these systems.

By focussing on the system, IBM aims at “best of breed” regardless of where elements come from, and “IBM is committed to best of breed”, Mehta insisted. Hence, all the agreements that BNT had in the past with Cisco, Brocade and Juniper will continue in the future. Some solutions are brought to market by IBM itself; some are third party and delivered by IBM.

what does the deal change for BNT?

It is easy to see what IBM gets from the deal, but what are BNT’s own objectives? “Our focus is to grow our share of the pie” (see Gartner stats on the right-hand side), Mehta responded, and BNT will achieve this through innovation and strategic partnerships, he added. He also mentioned that the slice of the pie would grow because the overall market itself would be growing too.

IBM will be working with co-opetitors HP and NEC

Mehta insisted that the business BNT was doing with HP and NEC will not only continue but be reinforced.