  • Your Data Center Is In My Way | garyberger.net

    Original URL path: http://garyberger.net/?p=643 (2016-04-25)


  • Network Mechanics to Network Conductors | garyberger.net
    …information modeling definition called YANG. Finally, you can divorce the information model from the data-transfer protocol and allow for a cleaner representation of the network configuration. But is this as far as we need to go? Why is SDN so interesting, and what is it telling us about the still very complex problems of building and operating networks? As the title of the blog suggests, I think something can be said for the expertise required to manage complex systems. The question becomes: are you going to stay a mechanic, worrying about low-level details, or are you going to be the pilot? Is it valuable to your employer for you to understand the low-level semantics of a specific implementation, or to rise above it by creating proper interfaces to manipulate the state of the network through a reusable interface? With information becoming more valuable than most commodities, it will take a shift in mindset to move from low-level addressing concerns to traffic analysis, modeling, and control. Understanding where the most important data is, how to connect to it, and how to avoid interference will become much more important than understanding protocols. So how does SDN contribute to this, and how do we get from the complex set of tasks of setting up and operating networks to more of a fly-by-wire approach? How do we go from managing a huge set of dials and instruments to managing resources like a symphony?

    The first thing to recognize is that you can't solve this problem in the network by itself. For years, application developers' expectations of the network were of infinite capacity and zero latency. They perceived that the flow-control capability in the network would suffice, giving them ample room to pummel the network with data. Locality was at best an afterthought, because they were developing on local machines, unaware of the impact of crossing network boundaries. Networking guys use terms like latency, jitter, bandwidth, oversubscription, congestion, broadcast storms, and flooding, while application developers talk in terms of consistency, user experience, accuracy, and availability.

    The second thing to recognize is that the network might need to be stripped down and built back up from scratch in order to deal further with its scaling challenges. In my eyes this is the clearest benefit of SDN, as it highlights some of the major challenges in building and running networks. Experimenting with a complex system is disastrous; to break new ground, it must be decomposed into its simplest form, "but certainly no simpler," as Einstein would say. It's possible that OpenFlow has gone this route and must be redesigned into a workable set of primitive functions which can be leveraged not just through a centralized controller model, but also to adapt new operating systems and protocols to leverage the hardware. There is much debate over what the best model is here and what the objectives are, since most networking is basically a craft and not a science…

    Original URL path: http://garyberger.net/?p=632 (2016-04-25)

  • OpenFlow | garyberger.net
    …the Internet is comprised of two namespaces: what we call the Domain Name System and the Internet address space. These turn out to be just synonyms for each other in the context of addressing, with different scope. Generally, we can describe an address space as consisting of a namespace with a set of identifiers within a given scope. An address space in a modern computer system is location-dependent but hardware-independent, thanks to the virtual memory manager and memory virtualization. The objective, of course, is to present a logical address space which is larger than the physical memory space, in order to give each process the illusion that it owns the entire physical address space. This is a very important indirection mechanism; if we didn't have it, applications would have to share a much smaller set of available memory. Does anyone remember DOS?

    Another problem with TCP/IP is that the real name of an application is not the text form that humans type; it's an IP address and its well-known port number. "As if an application name were a macro for a jump point through a well-known low-memory address" (Professor John Day). Binding a service which needs to be relocatable to a location-dependent address is why we have such problems with mobility today; in fact, we may even conclude that we are missing a layer. Given the size and failure rates of today's modern data centers, this problem also impacts the reliability of the services and applications consumers are so dependent on in today's web-scale companies. So while this is a very important part of OS design, it's completely different from how the Internet works, because the address system we use today has no such indirection without breaking the architecture (i.e., NATs, load balancers, etc.). If this is true, is the IP address system currently used on the Internet location-dependent? Well, actually, IP addresses were distributed as a location-independent name, not an address. There are current attempts to correct this, such as LISP and HIP, as well as beyond-IP solutions such as RINA. So it turns out the root of the problem in relation to addressing is that we don't have the right level of indirection, because according to Saltzer and Day we need a location-independent name to identify the application or service, but all we have is a location-dependent address, which is just a symbolic name.

    What is encapsulation? Object-oriented programming refers to encapsulation as a pattern by which the object's data is contained and hidden in the object, with access to it restricted to members of that class. In networking, we use encapsulation to define the different layers of the protocol stack, which, as we know, hides the data from members not in the layer; in this way the protocol model forms the hourglass shape, minimizing the interface and encapsulating the implementation.

    Sidebar: leaky abstractions. Of course, this isn't completely true, as the current protocol model of TCP/IP is subject to a leaky abstraction. For instance, there is no reason for the TCP logic to dive into the IP frame to read the TOS data structure (doing so would be a layer violation), but we know that TCP reaches into IP to compute the pseudo-header checksum. This rule can be dismissed if we think of TCP/IP as actually one layer, as it was before 1978. But the reality of the broken address architecture leads to the middleboxes, which must violate the layers in order to rewrite the appropriate structures to stitch the connection back together.

    So how does encapsulation help? In networking, we use encapsulations all the time. We essentially encapsulate the data structures which need to be isolated (the invariants) with some other tag, header, etc., in order to hide the implementation. So in 802.1Q we use the C-TAG to denote a broadcast domain or VLAN; in VXLAN we encapsulate the host within a completely new IP shell in order to bridge it across without leaking the protocol primitives necessary for the host stack to process within a hypervisor's stack. From the blog: "encapsulation provides the closest analog to the hierarchical memory virtualization in compute." So in the context of a hierarchy, yes, we encapsulate to hide, but not for the same reasons we have memory hierarchies (i.e., SRAM cache and DRAM). This generalization is where the blog post goes south.

    So really, what is the root of the problem, and how is SDN an approach to solve it? From the earlier statement: we need a location-independent name to identify the application or service, but all we have is a location-dependent address, which is just a symbolic name. If we go back to Saltzer, we see that's only part of the problem, as we need a few more address names and the binding services to accomplish that. One interesting example is the implementation of Serval from Mike Freedman at Princeton University. Serval actually breaks the binding between the application service name and the internetworking address (although there are deeper problems than this, since we seem to be missing a network layer somewhere). Serval accomplishes this through the manipulation of forwarding tables via OpenFlow, although it can be adapted to use any programmable interface, if one exists. Another example is the NDN project led by Van Jacobson.

    In summary: yes, it is unfair to conflate network virtualization with OS virtualization, as they deal with a different level of abstraction, state, and purpose. Just as hypervisors were invented to simulate a hardware platform, there is the need to simulate or abstract the network in order to build higher-level services and simplify the interface, not necessarily the implementation. In fact, a case can be made that OS virtualization may eventually diminish in importance as we find better mechanisms for dealing with isolation and…
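    The location-independent-name argument above can be made concrete with a small sketch: a directory that binds a service name to its current, location-dependent address, so the service can move without clients changing the name they use. This is a minimal illustration of the missing indirection, not Serval's or NDN's actual mechanism; all names here are hypothetical.

```javascript
// Hypothetical sketch: a directory binding a location-independent
// service name to a location-dependent address. Rebinding the name
// lets the service move without breaking its clients.
const directory = new Map();

function register(serviceName, address) {
  directory.set(serviceName, address); // (re)bind name -> current address
}

function resolve(serviceName) {
  const address = directory.get(serviceName);
  if (address === undefined) throw new Error("unknown service: " + serviceName);
  return address;
}

// A service registers, migrates to a new host, and re-registers;
// clients keep resolving the same name throughout.
register("photo-app", "10.0.0.5:8080");
console.log(resolve("photo-app")); // -> 10.0.0.5:8080
register("photo-app", "10.1.7.9:8080"); // service moved
console.log(resolve("photo-app")); // -> 10.1.7.9:8080
```

    Today's Internet conflates the two roles into one IP address, which is why mobility requires middleboxes; the sketch simply separates them.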

    Original URL path: http://garyberger.net/?cat=26 (2016-04-25)

  • NodeFlow: An OpenFlow Controller Node Style | garyberger.net
    …network environment within their own local machine. Instructions on how to set up the development environment can be seen here: Download and Get Started with Mininet.

    Code review: We first set up the network server with a simple call to net.createServer, to which we provide the port and address to listen on (the address and port are configured through a separate start script):

```javascript
NodeFlowServer.prototype.start = function(address, port) {
    var self = this
    var socket
    var server = net.createServer()

    server.listen(port, address, function(err, result) {
        util.log("NodeFlow Controller listening on " + address + ":" + port)
        self.emit('started', { "Config": server.address() })
    })
    // ...
```

    The next step provides the event listeners for socket maintenance, creates a unique sessionID from which we can keep track of each of the different switch connections, and sets up our main event-processing loop, which is called every time we receive data on our socket channel. We use a stream library to buffer the data and return us the decoded OpenFlow message in the msgs object. We make a simple check on the message structure and then pass it on for further processing:

```javascript
    server.on('connection', function(socket) {
        socket.setNoDelay(noDelay = true)
        var sessionID = socket.remoteAddress + ":" + socket.remotePort
        sessions[sessionID] = new sessionKeeper(socket)
        util.log("Connection from: " + sessionID)

        socket.on('data', function(data) {
            var msgs = switchStream.process(data)
            msgs.forEach(function(msg) {
                if (msg.hasOwnProperty('message')) {
                    self.processMessage(msg, sessionID)
                } else {
                    util.log("Error: Message is unparseable")
                    console.dir(data)
                }
            })
        })
    })
    // ...
```

    In the last section we leverage Node.JS EventEmitters to trigger our logic using anonymous callbacks. These event handlers wait for the specific event to happen and then trigger processing. We handle three specific events just for this initial release: OFPT_PACKET_IN, which is the main event to listen on for PACKET_IN events, and SENDPACKET, which simply encodes and sends our OF message on the wire:

```javascript
    self.on('OFPT_PACKET_IN', function(obj) {
        var packet = decode.decodeethernet(obj.message.body.data, 0)
        nfutils.do_l2_learning(obj, packet)
        self.forward_l2_packet(obj, packet)
    })

    self.on('SENDPACKET', function(obj) {
        nfutils.sendPacket(obj.type, obj.packet.outmessage, obj.packet.sessionID)
    })
```

    The "Hello World" of OpenFlow controllers simply provides a learning-bridge function. Below is the implementation, which is fundamentally a port of NOX's Pyswitch (Python):

```javascript
    do_l2_learning: function(obj, packet) {
        self = this

        var dl_src = packet.shost
        var dl_dst = packet.dhost
        var in_port = obj.message.body.in_port
        var dpid = obj.dpid

        if (dl_src == 'ff:ff:ff:ff:ff:ff') {
            return
        }
        if (!l2table.hasOwnProperty(dpid)) {
            l2table[dpid] = new Object() // create object
        }
        if (l2table[dpid].hasOwnProperty(dl_src)) {
            var dst = l2table[dpid][dl_src]
            if (dst != in_port) {
                util.log("MAC has moved from " + dst + " to " + in_port)
            } else {
                return
            }
        } else {
            util.log("learned mac " + dl_src + " port: " + in_port)
        }
        l2table[dpid][dl_src] = in_port
        if (debug) {
            console.dir(l2table)
        }
    }
```

    Alright, so seriously, why the big deal? There are other implementations which do the same thing, so why is NodeFlow so interesting? Well, if we look at setting…
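    The learning function above depends on NodeFlow internals (obj, l2table, util). As a self-contained sketch of the same table logic, with illustrative names that are not part of the NodeFlow API:

```javascript
// Standalone sketch of the learning-bridge table above (illustrative
// names, not the NodeFlow API). Keyed by switch dpid, it maps a source
// MAC to its ingress port, detecting moves and ignoring broadcast.
const l2table = {};

function l2Learn(dpid, srcMac, inPort) {
  if (srcMac === 'ff:ff:ff:ff:ff:ff') return 'ignored'; // never learn broadcast
  if (!(dpid in l2table)) l2table[dpid] = {};
  const prev = l2table[dpid][srcMac];
  l2table[dpid][srcMac] = inPort;       // always record the latest port
  if (prev === undefined) return 'learned';
  return prev === inPort ? 'known' : 'moved';
}

console.log(l2Learn('sw1', '00:11:22:33:44:55', 1)); // learned
console.log(l2Learn('sw1', '00:11:22:33:44:55', 1)); // known
console.log(l2Learn('sw1', '00:11:22:33:44:55', 2)); // moved
```

    The per-dpid nesting matters: the same MAC can legitimately appear behind different ports on different switches, so the table must be scoped to the datapath.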

    Original URL path: http://garyberger.net/?p=537 (2016-04-25)

  • Enterprise Cloud | garyberger.net
    …lives through technology. So let's take a closer look at the App Internet model. Hmm. So how is this different from today's web-centric application architecture? After all, isn't a web browser like Chrome or Safari an application?

    Jim Gray defined the ideal mobile task to be "stateless (no database or data access), has a tiny network input and output, and has a huge computational demand" [7]. To be clear, his assumption was that transport pricing would rise enough to make the economics infeasible; as we know, the opposite happened, as transport pricing has fallen [8]. "Most web and data processing applications are network or state intensive and are not economically viable as mobile applications." Again, the assumptions he had about telecom pricing made this prediction incorrect. He also contended that "data loading and data scanning are cpu-intensive, but they are also data intensive and therefore are not economically viable as mobile applications." The root of his conjecture was that the break-even point is 10,000 instructions per byte of network traffic, or about a minute of computation per MB of network traffic. Clearly the economics and computing power have changed significantly in only a few short years. No wonder we see such paradigm shifts and restructuring of architectures and philosophies.

    The fundamental characteristic which supports a better experience is latency. We perceive latency as the responsiveness of an application to our interactions. So is he talking about the ability to process more information on intelligent edge devices? Does he not realize that a good portion of applications written for the web are built with JavaScript, and that advances in virtual machine technology like Google's V8 are what enable all of that highly immersive, fast-responding interaction? Even data loading and data scanning have improved, through advances in AJAX programming and the emerging WebSockets protocol, allowing for full-duplex communications between the browser and the server in a common serialization format such as JSON.

    There will always be a tradeoff, however, especially as the data we consume is not our own but other people's. For instance, the beloved photo app in Facebook would never be possible using an edge-centric approach, as the data actually being consumed is from someone else. There is no way to store n^2 information with all your friends on an edge device; it must be centralized to an extent. For some applications, like gaming, we have a high sensitivity to latency, as the interactions are very time-dependent, both for the actions necessary to play the game and for how we take input for those actions through visual cues in the game itself. But if we look at examples such as OnLive, which allows lightweight endpoints to be used in highly immersive first-person gaming, clearly there is a huge dependency on the network. This is also the prescriptive approach behind Silk, although Colony talks about this in his context of App Internet. The reality is that the Silk browser is merely a renderer; all of the heavy lifting is done on the Amazon servers and delivered over a lightweight communications framework called SPDY. Apple has clearly dominated, pushing all focus today onto mobile device development. The App Internet model is nothing more than the realization that applications must be in the context of the model, something which the prior cloud and web didn't clearly articulate. The Flash wars are over. Or are they?

    So what is the point of all of this App Internet anyway? Well, the adoption of HTML5, CSS3, JavaScript, and advanced libraries, code generation, etc. has clearly unified web development and propelled the interface into a close-to-native environment. There are, however, some inconsistencies in the model, which allow Apple to stay just one step ahead with the look and feel of native applications. The reality is we have already been in this App Internet model for some time now, ever since the first XHR (XMLHttpRequest) was embedded in a page with access to a high-performance JavaScript engine like V8. So don't be fooled: without the network we would have no ability to distribute work and handle the massive amount of data being created and shared around the world. Locality is important, until it's not; at least until someone builds a quantum computer network. Over and out.

    [1] http://news.cnet.com/2100-1001-984051.html
    [2] http://www.techspot.com/news/45887-researchers-using-salt-to-increase-hard-drive-capacity.html
    [3] http://en.wikipedia.org/wiki/4G
    [4] http://www.fiercewireless.com/story/real-world-comparing-3g-4g-speeds/2010-05-25
    [5] http://www.businesswire.com/news/home/20110923005103/en/Xelerated-Begins-Volume-Production-100G-Network-Processor
    [6] http://en.wikipedia.org/wiki/Jevons_paradox
    [7] http://research.microsoft.com/apps/pubs/default.aspx?id=70001
    [8] http://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php (Note: this is more representative of a trend than a wholly accurate assessment of pricing.)

    [Data Center, Enterprise Cloud, Future Internet] Cloud Networking: Hyper or Reality? (December 14, 2011, Gary Berger)

    A colleague of mine pointed out a new post by Jayshree Ullal of Arista Networks, "Cloud Networking Reflections." I can't help but comment on a few things, for my own sanity.

    Prediction 1: "The rise in dense virtualization is pushing the scale of cloud networking."
    Evaluation 1: True. IT is very trend-oriented, meaning that sometimes the complexities of operating a distributed system are such that people are too busy to look deeply into the problem for themselves, and instead lean on the communities of marketing wizards to make a decision for them. Despite VMware's success, hardware virtualization makes up a very small part of the worldwide server base, which is estimated at around 32M servers [1]. I predict within a few short years a reversal of this trend, which peaked around 2008, for several reasons. One is the realization that the hardware virtualization tax grows increasingly with I/O, a very significant problem as we move into the era of Big Data.
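    Stepping back to Gray's break-even figure quoted earlier: the two forms of the figure are internally consistent, as a quick check shows. The instruction rate below is the implied assumption of his era's hardware, not a number from the post.

```javascript
// Consistency check of Gray's break-even numbers (illustrative; the
// retirement rates are assumptions, not figures from the post).
const instrPerByte = 10000;                   // Gray's break-even figure
const bytesPerMB = 1e6;
const instrPerMB = instrPerByte * bytesPerMB; // 1e10 instructions per MB
console.log(instrPerMB);                      // -> 10000000000

// "About a minute of computation per MB" implies a CPU retiring roughly:
const impliedRate = instrPerMB / 60;          // ~1.7e8 instructions/sec
console.log(impliedRate);

// A modern core retiring ~3e9 instructions/sec burns through the same
// budget in a few seconds, one reason the economics shifted:
console.log(instrPerMB / 3e9);                // ~3.3 seconds per MB
```
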
    The reality is that as we move to more interactive and socially driven applications, the OS container is not as crucial as it is in a generalized client-server model. Application developers need to continuously deal with higher degrees of scalability, application flexibility, improved reliability, and faster development cycles. Using techniques like lean software development and continuous delivery, application developers can get a minimum viable product out the door in weeks, sometimes days. Two: the age of many-task computing is upon us, and will eventually sweep away the brain-dead apps and the entire overhead that comes with supporting multiple thick containers. I say let's get down with LXC, or better yet Illumos Zones, which give us the namespace isolation without the SYSCALL overhead. Three: heterogeneous computing is crucial for interactive and engaging applications. Virtualization hides this at the wrong level; we need programming abstractions such as OpenCL/WebCL for dealing with specialization in vector programming and floating-point support via GPUs. Even micro-servers will have a role to play here, allowing a much finer grain of control while still improving power efficiency. It's not dense virtualization pushing the scale of cloud networking; it is the changing patterns of the way applications are built and used. This will, unfortunately, continue to change the landscape of both systems design and networking.

    My advice: Designers will finally wake up and stop being forced into this hyper-virtualized compute-arbitrage soup, and will engineer application services to exploit heterogeneous computing instead of being constrained by a primitive and unnecessary abstraction layer. In the meantime, ask your developers to spend the time to build scalable platform services with proper interfaces to durable and volatile storage, memory, and compute. In this way you isolate yourself from specific implementations, removing the burden of supporting these runaway applications.

    Prediction 2: "Fabric has become the marketing buzzword in switching architectures from vendors trying to distinguish themselves."
    Evaluation 2: Half true. I think the point of having specialized fabrics is a side effect of the scalability limits of 1990s-based network design, protocols, and interconnect strategies. Specialized and proprietary fabrics have been around for years: Thinking Machines, Cray, SGI, and Alpha all needed to deal with the scalability limits of connecting memory and compute together. Today's data centers are an extension of this, and have become modern supercomputers connected together, i.e., a fabric. Generally, the current constraints and capabilities of technology have forced a rethink of how to optimize network design for a different set of problems. There is nothing terribly shocking here, unless you believe that current approaches are satisfactory. If the current architectures are satisfactory, why do we have so much confusion over whether to use L2 multipathing or L3 ECMP? Why is there not ONE methodology for scaling networks? Well, I'll tell you, if you haven't figured it out: it's because the current set of technologies ARE constrained, and lack the capabilities necessary for truly building properly designed networks for future workloads.

    "The beauty of Arista's approach is we can scale and manage two to three times better with standards. I fail to understand the need for vendor specific proprietary tags for active multipathing when standards based MLAG at Layer 2 or ECMP at Layer 3 and future TRILL resolves the challenges of scale in cloud networks."

    Scale 2x to 3x better with standards? How about 10x, or better yet 50x? Really, a 2-3x improvement in anything is statistically insignificant, and you are still left with corner cases which absolutely grind your business to a halt. Pointing out that MLAG is better than TRILL or SPB, or that ECMP is better than whatever, is not the point. I mean, really, how many tags do we need in a frame anyway, and what the hell is with VXLAN and NVGRE? Additional data-plane bits are not the answer; we need to rethink the layering model, the address architecture, and the error- and flow-control mechanisms. There is no solution unless you break down the problem layer by layer until you remove all of the elements down to just the invariants. It's possible that this is the direction of OpenFlow/SDN; the only problem may be that it completely destroys the layers entirely, but maybe that's the only way to build them back up the right way. BTW, there is nothing really special about saying "standards"; after all, TCP/IP itself was a rogue entry in the standards work (INWG 96), so it's another accidental architecture that happened to work for a time.

    My advice: For those who have complete and utter autonomy, treat the DC as a giant computer, which should be designed to meet the goals of your business within the capabilities and constraints of today's technology. Once you figure it out, you can use the same techniques in software to open-source your innovation, making it generally feasible for others to enter the market, if you care about supply chain. For those who don't, ask your vendors and standards bodies why they can't deliver a single architecture which doesn't continuously violate the invariants by adding tags, encaps, bits, etc.

    Prediction 4: "Commercially available silicon offers significant power, performance and scale benefits over traditional ASIC designs."
    Evaluation 4: Very true. Yeah, no surprise here, but it's not as simple as just picking a chip off the shelf. When designing something as complex as an ASIC, you have to make certain tradeoffs. Feature sets build up over time, and it takes time to move back to a leaner model of primitive services with exceptional performance. There is no difference between an ASIC designer working for a fabless semiconductor company spinning out wafers from TSMC and a home-grown approach; it is in the details of the design and implementation, with all of the sacrifices one makes when choosing how to allocate resources.

    My advice: Don't make decisions based on who makes the ASIC, but on what can be leveraged to build a balanced and flexible system. The reality is there is more to uncover than just building ASICs; for instance, how about a simpler data-plane model which would allow us to create cheaper and higher-performance ASICs?

    Prediction 5: "FCoE adoption has been slow and not lived up to its marketing hype. A key criterion for using 10GbE in storage applications is having switches with adequate packet buffering that can cope with speed mismatches to avoid packet loss and balance performance."
    Evaluation 5: True. This is also misleading, though, as it compares FCoE with FC and 10GE sales as a way of dismissing a viable technology. But the reality is that the workload pattern changed, moving the focus from interconnect to interface. From an application development point of view, interfacing with storage at a LUN or block level is incredibly limiting; it's simply not the right level of abstraction, which is why we started to move to NAS, or file-based approaches, and even to the reemergence of content-based and distributed object stores. Believe me, developers don't care if there is an FC backend or FCoE; it is irrelevant. The issue is performance. When you have a SAN-based system, you are dealing with a system balanced for different patterns of data access, reliability, and coherency. This might be exactly what you don't want; you may be very write-intensive or read-intensive and require a different set of properties than current SAN arrays provide. The point about adding buffering to the equation not only makes things worse but also increases the cost of the network substantially. Firstly, the queues can build up very quickly, especially at higher clock speeds, and the impact on TCP flow control is a serious issue. I am sure the story is not over, and we will see different ways of dealing with this problem in the future. (You might want to look a little closer at FC protocols and see if you can spot any familiarity with TRILL.)

    My advice: Forget the hype of Hadoop and concentrate on isolating the workload patterns that impact your traffic matrix. Concentrate on what the expectations of the protocols are and on how to handle error and flow control, mobility, isolation, security, and addressing. Develop a fundamental understanding of how to impart fair scheduling in your system to deal with demand floods, partitioning events, and chaotic events. It turns out a proper load-shedding capability can go a long way in sustaining system integrity.

    Yes, I know, that's a lot of opaque nonsense, and while many advantages exist for businesses which choose to utilize the classical models, there are still many problems in dealing with the accidental architecture of today's networks. The future is not about what we know today, but about what we can discover and learn from our mistakes, once we realize we made them. (While I do work at Cisco Systems as a Technical Leader in the DC group, these thoughts are my own and don't necessarily represent those of my employer.)

    [1] http://www.mediafire.com/file/zzqna34282frr2f/koomeydatacenterelectuse2011finalversion.pdf

    [Enterprise Cloud] Distributed Computing (February 4, 2010, Gary Berger)

    We can all agree that we are in the midst of a shift in the practice of information technology delivery, fueled by economization, global interconnection, and changes in both the computer and social sciences. Although this can be considered revolutionary change in some circumstances, it is rooted in problems known almost 20 years ago. For those of you interested in the history, and a very clairvoyant look at this current shift, read "A Note on Distributed Computing" [1]. This paper concentrates on integration of the current language model to address the issues of latency, concurrency, and partitioning in distributed systems:

    "They [programmers] look at the programming interfaces and decide that the problem is that the programming model is not close enough to whatever programming model is currently in vogue. A furious bout of language and protocol design takes place and a new distributed computing paradigm is announced that is compliant with the latest programming model. After several years, the percentage of distributed applications is discovered not to have increased significantly, and the cycle begins anew."

    The paper concludes with very specific advice: "Differences in latency, memory access, partial failure, and concurrency make merging of the computational models of local and distributed computing both unwise to attempt and unable to succeed."

    Now, there are a few things not known back in 1994, including where exactly Moore's Law would take us, language development, ubiquitous device access, and the scale at which the Internet has grown. But when you examine the issues discovered by the likes of Google, Amazon, Facebook, etc., you recognize that the cycle has indeed begun anew. The interesting part is that the velocity of innovation to solve these problems, along with the cooperative nature of open-source software, has fueled an even broader manifestation of change, and companies of all sizes can help contribute to the greater good of open software, enabling communities of interest to develop and share information in an open yet secure way.

    [1] Waldo, Wyant, Wollrath, Kendall (Sun Microsystems), 1994. SMLI TR-94-29.

    [Enterprise Cloud] Amazon on the Enterprise (October 26, 2009, Gary Berger)

    Last week I attended the AWS Cloud for the Enterprise event held in NY, and was not surprised to see a massive turnout for Dr. Vogels' keynote and the following customer presentations. I have been to the last three events held in NY, and each time the crowds get larger, the participants become more enthusiastic, and the level of innovation continues to accelerate at a mad pace. Some of the customer representatives included Conde Nast Digital (including Wired.com), Nasdaq OMX, Sony Music Entertainment, and New York Life. I thought the most compelling discussion was given by Michael Gordon, First VP, New York Life, where he discussed how he migrated…
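    Returning to the buffering point in the FCoE evaluation above: the claim that queues build up very quickly at 10GbE line rate is easy to quantify. The buffer size below is an illustrative assumption, not a figure from the post.

```javascript
// How fast a switch packet buffer fills at line rate (illustrative).
const lineRateBps = 10e9;              // 10GbE, in bits per second
const bufferBytes = 12 * 1024 * 1024;  // assume a 12 MB shared buffer
const fillSeconds = (bufferBytes * 8) / lineRateBps;
console.log(fillSeconds * 1000);       // ~10 ms to fill at full line rate
```

    A buffer that fills in ~10 ms also adds up to ~10 ms of queueing delay before it drops, which is exactly the kind of latency and loss signal that whipsaws TCP's congestion control, the "serious issue" the evaluation alludes to.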

    Original URL path: http://garyberger.net/?cat=19 (2016-04-25)

  • MultiCore-ManyCore | garyberger.net
    …time data-grid-enabled networking. With 40 Intel cores and 1TB of memory available to GigaSpaces XAP's high-performance in-memory data grid, the system achieved an astounding 500,000 ops/sec on 1024B POJOs; the system could load 1 billion objects in just under 36 minutes. Now this might not sound extraordinary, but when you consider how to build an application where the bottleneck on a 40-core, 1TB system is CPU- and memory-bound, properly deal with failures, and have automation and instrumentation, you can't beat this kind of system. GigaSpaces is also integrated with the Cisco UCS XML API for dynamic scaling of hardware resources. Eventually people will catch on that memory is critical for dealing with Big Data, and it's no longer an issue of reliability or cost. Without disk rotational latency and poor random access in the way, we can push the limits of our compute assets while leveraging the network for scale. Eventually we might see a fusion of in-memory data grids with the network in a way which allows us to deal with permutation traffic patterns by changing the dynamics of networking, processing, and storage.

    [MultiCore-ManyCore] Intel + RapidMind: Better Utilization of Multi-Core (August 24, 2009, Gary Berger)

    It has long been understood that programming for SMP and multi-core architectures is extremely difficult on many fronts. Managing concurrency and serialization are important aspects of multi-threaded application development. The purchase of RapidMind by Intel boosts Intel's toolbox for helping the compiler jockeys fully utilize multi-core/many-core architectures in their C++ programs. RapidMind offers simple constructs for developers to decorate their code, utilizing an open interface through an array construct, to remove the developer from worrying about the atomic entity of execution. RapidMind provides an abstraction layer which allows…
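    As a sanity check, the GigaSpaces figures quoted above are mutually consistent (illustrative arithmetic, not from the post):

```javascript
// Cross-check the quoted GigaSpaces benchmark numbers (illustrative).
const opsPerSec = 500000;            // reported sustained operation rate
const objects = 1e9;                 // one billion 1024-byte POJOs
const minutes = objects / opsPerSec / 60;
console.log(minutes);                // ~33.3 min, consistent with "under 36"

// Implied footprint of the raw payload alone:
const bytes = objects * 1024;
console.log(bytes / 2 ** 40);        // ~0.93 TiB, matching the 1TB system
```
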

    Original URL path: http://garyberger.net/?cat=18 (2016-04-25)
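The idea behind such array constructs can be sketched generically (this is not RapidMind's actual C++ API, which the excerpt doesn't show): the developer writes only the per-element computation, and a runtime maps it across the data, hiding thread management entirely.

```python
from concurrent.futures import ThreadPoolExecutor

def per_element(x):
    # The developer supplies only the "atomic entity of execution".
    return x * x + 1

def parallel_map(data, workers=4):
    # A data-parallel runtime schedules the per-element work across
    # cores; here a thread pool stands in for that runtime.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(per_element, data))
```

A system like RapidMind compiled such kernels down to SIMD units and GPU/Cell back ends rather than OS threads, but the programming model is the same: say what happens per element, not how to thread it.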

  • Platform as a Service | garyberger.net
    needed, manually or dynamically. You can re-balance the system and spread the partitions across all the existing JVMs. It is up to you to determine how far you want to scale the system, which means you have total control over system behavior. The routing mechanism with GigaSpaces will function without any problems and spread data across all partitions as long as you have more unique keys than the number of partitions; this should not be a problem in 99.99% of cases. The comparison ignores many other GigaSpaces features, such as Mule integration, event handling and data-processing high-level building blocks, web container and dynamic HTTP configuration, service management, system management tools, performance (especially for single-object operations, batch operations, and local cache), text search integration, massive client support, large data support (up to several terabytes), large object support, Map/Reduce API, scripting language support (Java, .NET, C++, Scala, Groovy), cloud API support, schema evolution, etc. Having new players is great and verifies that there is room for new vendors in this huge market for In-Memory Data Grid technologies on the cloud (private and public). But it is important also to do the right comparison. See more here: http://www.gigaspaces.com/wiki/display/SBP/Capacity+Planning and http://www.gigaspaces.com/wiki/display/CCF/CCF4XAP+Documentation+Home

    Platform as a Service: What Should Be VMware's Next Move?, August 14, 2009, Gary Berger

    I wanted to point out an interesting article posted here on CIO.com. Here is an excerpt: "The most glaring omission in VMware's portfolio is the need for Java object distributed caching to provide yet another alternative to scalability," Ovum analyst Tony Baer said in a post to his personal blog on Tuesday. "If you only rely on spinning out more virtual machines, you get a highly rigid, one-dimensional cloud that will not provide the economies of scale and flexibility that clouds are supposed to provide." So we wouldn't be surprised if GigaSpaces or Terracotta might be next in VMware's acquisition plans. Now, I couldn't be more happy that someone besides myself recognizes that in order for services to be uncoupled from the persistence layer, you must have a distributed caching system. There are several players, not all created equal, but all with value in this field. They include GigaSpaces, Terracotta, Oracle (Tangosol Coherence), and GemStone. Distributed caching is nothing new, and most of the large internet companies like Facebook, Twitter, etc. are utilizing open source tools like memcached to get a very rudimentary distributed cache. Gartner analyst Massimo Pezzini is right on with his comment: "I think one of the reasons why VMware is buying SpringSource is to be able to move up the food chain and sell cloud-enabled application infrastructure on top of their virtualization infrastructure," Pezzini said. "It wouldn't take much to make it possible to deploy Spring on top of the bare VMware, i.e. with no Linux or Windows in

    Original URL path: http://garyberger.net/?cat=14 (2016-04-25)
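The routing claim above (data spreads evenly once unique keys far outnumber partitions) is easy to see with a hash-routing sketch. The function names are hypothetical and GigaSpaces' actual hash is an implementation detail; any stable hash gives the same behavior.

```python
from collections import Counter

def partition_for(key, num_partitions):
    # Route an entry to a partition by hashing its routing key.
    return hash(key) % num_partitions

def spread(num_keys, num_partitions):
    # Count how many keys land on each partition.
    return Counter(partition_for(f"key-{i}", num_partitions)
                   for i in range(num_keys))
```

With 10,000 unique keys over 4 partitions, each partition receives roughly 2,500 entries. With only 3 unique keys over 4 partitions, one partition necessarily stays empty, which is the degenerate case the quote warns about.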

  • Cisco UCS “Cloud In A Box”: Terabyte Processing In RealTime | garyberger.net
    in other words, forget about Hadoop; let's go real-time, data-grid-enabled networking. With 40 Intel cores and 1TB of memory available to GigaSpaces XAP's high-performance In-Memory Data Grid, the system achieved an astounding 500,000 ops/sec on 1024B POJOs; the system could load 1 billion objects in just under 36 minutes. Now this might not sound extraordinary, but when you consider how to build an application where the bottleneck on a 40-core, 1TB system is CPU- and memory-bound, properly deal with failures, and have automation and instrumentation, you can't beat this kind of system. GigaSpaces is also integrated into the Cisco UCS XML API for dynamic scaling of hardware resources. Eventually people will catch on that memory is critical for dealing with Big Data, and it's no longer an issue of reliability or cost. Without disk rotational latency in the way, and without poor random access, we can push the limits of our compute assets while leveraging the network for scale. Eventually we might see a fusion of in-memory data grids with the network in a way which allows us to deal with permutation traffic patterns by changing the dynamics of networking, processing

    Original URL path: http://garyberger.net/?p=527 (2016-04-25)
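The two quoted figures are consistent with each other, as a quick back-of-the-envelope check shows (assuming "just under 36 minutes" means roughly 36 × 60 seconds of loading):

```python
objects = 1_000_000_000        # "1 Billion objects"
seconds = 36 * 60              # "just under 36 minutes" = 2160 s
rate = objects / seconds
print(f"{rate:,.0f} objects/sec")  # roughly 463,000 objects/sec
```

That is, the bulk load alone sustains about 463,000 writes per second, in the same ballpark as the quoted 500,000 ops/sec; at 1024 bytes per POJO that is on the order of 0.5 GB/s of payload moving through the grid.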


