Big data

Three trends for CIOs in 2019: Bob Gault, Extreme

Extreme Networks is focused on customer-driven networking to improve transformation, innovation, and customer experience – from the enterprise edge to the cloud – with software-driven solutions that are agile, adaptive, and secure. Now, it has announced the path to the new, agile data center. How is it different from the biggies?

Bob Gault.

Bob Gault, Chief Revenue and Services Officer, Extreme Networks, said: “Most vendors today sell closed technology stacks with domain-level visibility. This makes it difficult for customers to adopt new, software-driven approaches that drive the business forward, or to test new technologies that aren’t driven by the vendor they have aligned with.

“In contrast, Extreme provides technology that works in a multi-vendor heterogeneous environment, eliminating vendor lock-in. We deliver real, multi-vendor capabilities that meet the needs of the modern enterprise. For example, the Extreme Management Center allows for full visibility and management of multi-vendor networks, and Extreme Workflow Composer enables cross-domain, multi-vendor IT automation that allows organizations to automate at their pace—from automating simple tasks to deploying sophisticated workflows.

“Further, our standards-based, multi-vendor interoperable and adaptable data center fabric gives customers the ability to build once and re-use it many times. All of this gives organizations the ability to accelerate their digital transformation initiatives, and to adapt and respond to new service demands with cloud speed.”

Enabling digital transformation
In that case, how does Extreme Networks enable digital transformation? He added: “According to a recent study, 89 percent of enterprises worldwide either plan to adopt, or have already adopted, a digital-first business strategy. The key enabler of digital transformation is an organization’s network infrastructure.

“With the advent of IoT, pervasive mobility and growing cloud service adoption, the network has become increasingly distributed. As such, Extreme collaborates with our customers to build open, software-driven networking solutions from the enterprise edge to the cloud that are agile, adaptive, and secure to enable digital transformation.

“We are a group of dedicated professionals who are passionate about helping our customers – and each other – succeed. Our 100% in-sourced services and support are #1 in the industry and even with 30,000 customers globally, including half of the Fortune 50, we remain nimble and responsive to ensure customer and partner success. We call this Customer-Driven Networking.”

The three core drivers are: user experience, data and insights, and the foundation to keep all that data secure. I asked Gault to elaborate.

He said: “On top of providing the network building blocks for wired and wireless LAN access (routers, switches, access points, etc.), Extreme offers an array of software capabilities including analytics, artificial intelligence and machine learning to help customers gather granular insights into who is using what application, when, and where.

“With that data, customers can understand usage patterns to optimize applications, do capacity planning and fine-tune the infrastructure for optimal performance. By applying machine learning to the data, Extreme’s analytics can detect anomalies from devices and applications, and block potentially malicious access. Collectively, these capabilities allow organizations to deliver a better customer experience via personalized offers and engagement based on user behavior and with maximum security, network uptime and greater throughput.

“Extreme offers the industry’s only end-to-end, single pane of glass solution that enables customers to accelerate digitization while saving IT operations cost with automation, visibility, analytics, and control. Our solution helps secure our customers’ networks and ensure exceptional user experiences with fast Mean Time to Innocence, application performance insights, security and forensics, and automated roll-out of consistent policies.”
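Extreme does not publish the internals of its analytics engine, but the general idea of applying machine learning to flag anomalous device or application behavior from network telemetry can be illustrated generically. Below is a minimal, hypothetical sketch using scikit-learn's IsolationForest on a few made-up per-device flow features; the feature set, numbers and thresholds are illustrative assumptions, not Extreme's implementation.

```python
# Hypothetical sketch of ML-based anomaly detection on network telemetry.
# Not Extreme's implementation; features and data are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-device features: [bytes per minute, packets per minute, distinct destinations]
normal_traffic = rng.normal(loc=[5e6, 4000, 12], scale=[1e6, 800, 3], size=(500, 3))
suspicious = np.array([[9e7, 60000, 400]])  # e.g., a device suddenly scanning or exfiltrating

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

for row in np.vstack([normal_traffic[:3], suspicious]):
    label = model.predict(row.reshape(1, -1))[0]  # 1 = normal, -1 = anomaly
    print(row, "-> anomaly" if label == -1 else "-> normal")
```

In a real deployment, a device flagged this way would then be quarantined or rate-limited by policy, which is the kind of automated response the quote above alludes to.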

Disruptive innovation with Xilinx Versal ACAP super FPGA

Xilinx Inc. recently announced Versal – the super FPGA. Versal is said to be the first adaptive compute acceleration platform (ACAP), a fully software-programmable, heterogeneous, compute platform.

Versal ACAP combines scalar engines, adaptable engines, and intelligent engines to achieve dramatic performance improvements of up to 20X over today’s fastest FPGA implementations, and over 100X over today’s fastest CPU implementations—for data center, wired network, 5G wireless, and automotive driver-assist applications.

Versal ACAP
Versal is the first ACAP by Xilinx. What exactly is an ACAP? For which applications does it work best?

Victor Peng

Victor Peng, president and CEO, Xilinx, said: “An ACAP is a heterogeneous, hardware-adaptable platform that is built from the ground up to be fully software programmable. An ACAP is fundamentally different from any multi-core architecture as it provides hardware programmability, but the developer does not have to understand any of the hardware details.

“From a software standpoint, it includes tools, libraries, run-time stacks and everything that you’d expect from a modern software-driven product. The tool chain, however, takes into account every type of developer—from the hardware developer, to embedded developer, to data scientist, and to framework developer.”

Differences from classic FPGA and SoC
That means there are technical differences between the Versal and a classic FPGA or SoC.

He said: “A Versal ACAP is significantly different than a regular FPGA or SoC. Zero hardware expertise is required to boot the device. Developers can connect to a host via CCIX or PCIe and get memory-mapped access to all peripherals (e.g., AI engines, DDR memory controllers).

“The Network-on-Chip is at the heart of what makes this possible. It provides ease-of-use, and makes the ACAP inherently SW programmable—available at boot and without any traditional FPGA place-and-route or bit stream. No programmable logic experience is required to get started, but designers can design their own IP or add from the large Xilinx ecosystem.

“With regard to Xilinx’s hardware programmable SoCs (Zynq-7000 and Zynq UltraScale+ SoCs), the Zynq platform partially integrated two out of the three engine types (Scalar Engines and Adaptable Hardware Engines).

“Versal devices add a third engine type (intelligent engines). More importantly, the ACAP architecture tightly couples them together, via the Network on Chip (NOC) to enable each engine type to deliver 2-3x the computational efficiency of a single engine architecture, such as a SIMT GPU.”
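The NoC-based, memory-mapped access that Peng describes is conceptually similar to how a host already talks to a PCIe device on Linux: map a BAR into user space and read or write registers at known offsets. The snippet below is a generic, hypothetical illustration of that pattern – the device path and register offset are made-up placeholders, not Versal specifics or a Xilinx API.

```python
# Generic illustration of memory-mapped access to a PCIe device BAR on Linux.
# Device path and register offset are hypothetical placeholders, not Versal specifics.
import mmap
import os
import struct

BAR0 = "/sys/bus/pci/devices/0000:03:00.0/resource0"  # hypothetical device
STATUS_REG = 0x0010                                   # hypothetical register offset

fd = os.open(BAR0, os.O_RDWR | os.O_SYNC)
try:
    bar = mmap.mmap(fd, 4096, mmap.MAP_SHARED, mmap.PROT_READ | mmap.PROT_WRITE)
    (status,) = struct.unpack_from("<I", bar, STATUS_REG)  # read a 32-bit register
    print(f"status register = 0x{status:08x}")
    struct.pack_into("<I", bar, STATUS_REG, 0x1)           # write it back
    bar.close()
finally:
    os.close(fd)
```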

Does this mean that Xilinx will address, besides the classic hardware designers, the application engineers in the future?

He noted: “Xilinx has been addressing software developers with design abstraction tools as well as its hardware programmable SoC devices (Zynq-7000 and Zynq UltraScale+) for multiple generations. However, with ACAP, software programmability is inherently designed into the architecture itself for the entire platform, including its hardware adaptable engines and peripherals.”

Global semiconductor industry trends 2019: Jaswinder Ahuja, Cadence

Today happens to be my birthday! 😉 And, what better way to celebrate than with a discussion on the global semiconductor industry and the expected trends for 2019?

I caught up with my good friend, Jaswinder Ahuja, Corporate VP & MD of Cadence Design Systems India Pvt Ltd, and asked him about the global semiconductor industry trends for 2019. So, how is the global semicon industry performing this year? How does Cadence see it going in 2019?

Jaswinder Ahuja, Cadence.

Global semicon industry trends
Jaswinder Ahuja said: “The semiconductor industry is doing very well. Estimates say that it has crossed $400 billion in revenue. This growth is being driven by four or five waves that have emerged over the last couple of years. These are:

  • Cloud and data center applications are booming, and the top names in this space, including Amazon and Google, are now designing their own chips.
  • Automotive is (and has been) going through a transformation over the last few years. ADAS is just the beginning. From infotainment to safety, the whole vehicle runs on precision electronics.
  • Industrial IoT is another wave. By incorporating artificial intelligence (AI) into manufacturing and industrial processes, we are looking at a revolution—what is being called Industry 4.0.
  • Mobile and wireless have, of course, driven growth in the last decade to decade-and-a-half, and they don’t show any signs of slowing down.
  • Consumer and IoT devices can also be considered a wave, although the consumer wave started some time ago. IoT is the game-changer there, with billions of connected devices forecast for the next 5-10 years.

“Thanks to these technology waves, our sense is that the growth will continue into 2019, and probably beyond, especially as AI and ML become more prevalent across applications.”

Global EDA and memory industries
How is the global EDA industry performing this year? How do you see it going next year in 2019?

He said: “Cadence has seen strong results in 2018 so far across product lines. This is thanks to multiple technology waves, especially machine learning, that are driving increased design activity and our System Design Enablement strategy, as well as our continued focus on innovation and launching new products.”

And, what’s the road ahead for memory? Is memory attracting more investment?

He added: “The memory market is being driven by the data-driven economy, and the need to store and process data at the edge and in the cloud. Added to that is the huge demand for smart and connected devices, for which memory is crucial.

“There isn’t any data about investments, but keeping in mind the consolidation happening across the industry, we may well witness some M&A activity involving memory companies as well. The merger of SanDisk and Western Digital is one such example.”

EUV lithography trends
Has EUV lithography progressed? By when is EUV lithography likely to become mainstream?

Ahuja noted: “As technology advances, both manufacturing and design complexity grow. Designs are being scaled down to meet the ever-increasing demand for more functionality contained in a single chip, creating unique implementation challenges.

“Manufacturing is facing huge challenges in terms of printability, manufacturability, yield ramp-up and variability. Unfortunately, restrictions on power, performance and area (PPA) or turnaround time (TAT) do not scale up along with these factors.

“Foundries have been talking about EUV for years now. However, the power and performance improvements with EUV don’t look very significant at this time. Clearly, there is still some distance to go before EUV becomes mainstream.

“On a related note, in February 2018, Cadence and imec, the world-leading research and innovation hub in nanoelectronics and digital technologies, announced that their extensive, long-standing collaboration had resulted in the industry’s first 3nm test chip tapeout.

“The tapeout project, geared toward advancing 3nm chip design, was completed using EUV and 193 immersion (193i) lithography-oriented design rules, and the Cadence Innovus Implementation System and Genus Synthesis Solution.”

Trends in power and verification
Finally, what is the latest regarding coverage and power across all the aspects of verification?

He said: “Over the past decade, verification complexity and demands on engineering teams have continued to rise rapidly. Applying innovative solution flows, automation tools, and best-in-class verification engines is necessary to overcome the resulting verification gap.

“With regard to verification coverage, the challenge is always to know when you are done (the process of verification signoff). Cadence has a unique methodology and technology for measuring and signing off on the design and verification metrics used during the many milestones typical in any integrated circuit (IC) development, and it is called Metric Driven Verification (MDV).

“While milestones and metrics vary by design type and end-application, the final verification signoff will, at a minimum, contain the criteria and metrics within a flexible, human-readable and user-defined organizational structure. Automated data collection, project tracking, dashboards and in-depth reporting techniques are mandatory elements to eliminate subjectivity, allowing engineers to spend more time on verification and less time manually collecting and organizing data.

“Power-optimization techniques are creating new complexities in the physical and functional behavior of electronic designs. An integral piece of a functional verification plan, Cadence’s power-aware verification methodology can help verify power optimization without impacting design intent, minimizing late-cycle errors and debugging cycles. After all, simulating without power intent is like simulating with some RTL code black-boxed.

“The methodology brings together power-aware elaboration with formal analysis and simulation. With power-aware elaboration, all of the blocks as well as the power management features in the design are in place, so design verification with power intent is possible. Power intent introduces power/ground nets, voltage levels, power switches, isolation cells, and state retention registers. Any verification technology—simulation, emulation, prototyping, or formal—can be applied on a power-aware elaboration of the design.”
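Cadence’s MDV flow is tool-specific, but the underlying idea of signing off against user-defined metrics can be sketched generically. The example below is a hypothetical illustration: it checks a handful of coverage metrics against per-milestone targets and reports whether the signoff criteria are met. The metric names and thresholds are assumptions for illustration, not Cadence’s methodology.

```python
# Hypothetical sketch of metric-driven signoff: compare collected coverage metrics
# against user-defined targets for a milestone. Not a Cadence tool flow.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    achieved: float  # percent achieved, e.g. aggregated from regression reports
    target: float    # percent required for signoff

MILESTONE = "RTL freeze"
metrics = [
    Metric("code coverage (line)", 96.2, 95.0),
    Metric("functional coverage", 88.5, 90.0),
    Metric("assertion coverage", 100.0, 100.0),
]

failures = [m for m in metrics if m.achieved < m.target]
for m in metrics:
    status = "PASS" if m.achieved >= m.target else "FAIL"
    print(f"[{status}] {m.name}: {m.achieved:.1f}% (target {m.target:.1f}%)")

print(f"{MILESTONE} signoff:", "met" if not failures else f"not met ({len(failures)} metric(s) below target)")
```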

AddOn Networks disrupts status quo in optical transceiver market

Data centers are set to be the driving force behind increasing optical transceiver sales, which are set to reach $6.87 billion by 2022. AddOn provides fully-functional, readily-deployable, fully-tested, compatible transceivers that outperform and outlast OEM equivalents – at a fraction of the cost.

AddOn Networks, based in Tustin, California, USA, has launched a state-of-the-art, $20 million programming and testing lab with a customer-first focus. The lab enables AddOn to offer transceivers guaranteed for functionality and performance.

So, how does AddOn deliver optical transceivers that meet and exceed OEM needs?

A company spokesperson said: “Since the OEMs may either batch test their transceiver products or not test them at all, customers are forced to adopt a ‘hope-and-pray’ stance when it comes to installing and using these critical components.

“AddOn tests every single part we send out. Most OEMs do batch testing or they just test one or two parts in a lot. We also go the extra mile and test that part in what we call “intended-use” or in-environment testing.

“We ask our customers to verify what switches these transceivers will be plugged into. We mimic that in our lab and make sure that the transceiver is performing in the exact setup the customer will be using it in. When purchasing from AddOn, customers are assured that their new transceivers have been individually tested, serialized, and are MSA-compliant.”

How is AddOn providing cost-effective transceivers that put full optical deployment within the reach of more data centers? Aren’t there others?

He added: “Basically, we’re removing barriers to adopt optical transceivers and high-speed cabling, by providing trusted, tested and independent solutions that challenge current pricing models. We provide fully-functional, compatible transceivers that outperform and outlast OEM equivalents – at a fraction of the cost.

“These cost savings allow our customers to stretch their IT budgets beyond what they thought was possible. There are others, but they don’t offer the quality that we do. There’s still a concern about performance from a third-party optics company. We’ve taken that concern away with our process and testing.”

How does AddOn’s testing lab enable it to offer transceivers that are guaranteed for functionality and performance?

He said: “We’re fully aware that true quality and reliability come with a cost, and we’ve put our money where our mouth is here: in the form of a new, $20 million, state-of-the-art programming and testing lab. Here, we are able to test to specifications, OEM standards and within the intended environment.

“This attention to detail has led our products to achieve an industry-leading success rate of 99.98 percent for all transceivers shipped – compared to an average of 85 percent from competitors. This is critical, because small differences in failure rates create a large probability of failure at high unit volumes during deployment. Transceiver failure creates significant downtime and can halt expensive deployments. This is mostly due to our stringent testing process and Data Traveler process (programming and serialization).”
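The point about small failure-rate differences compounding at volume is easy to check with basic probability: if each transceiver works independently with probability p, the chance that an entire batch of N units works is p^N. The quick calculation below uses the success rates quoted above; the batch size and the independence assumption are illustrative.

```python
# Probability that every unit in a batch works, given a per-unit success rate.
# Success rates are the figures quoted above; batch size is an illustrative assumption.
def batch_success_probability(per_unit_success: float, units: int) -> float:
    return per_unit_success ** units

for rate in (0.9998, 0.85):
    p = batch_success_probability(rate, units=1000)
    print(f"per-unit success {rate:.2%} -> P(1,000-unit batch fully working) = {p:.2e}")
# ~0.82 for a 99.98 percent success rate versus ~4e-71 for 85 percent:
# tiny per-unit differences dominate at deployment scale.
```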

Finally, what is the industry-leading Data Traveler process for optical transceivers?

He noted: “It’s our proprietary tracking system that creates a living manifest and allows us to uniquely serialize, track and ship every part. AddOn’s Data Traveler process tracks every part through unique serialization, programming, testing, labeling, boxing and shipping. We’ve set the blueprint for how to program, test and ship product at a high volume without sacrificing quality or performance.”

Helix’s MxC 200 DC-DC power IC increases efficiency at data centers

Fabless power semiconductor company, Helix Semiconductors, announced that Agility Power Systems (Agility) is using the MxC 200 IC for its innovative, 1kW, high-efficiency 48VDC to 12VDC power converter. Agility designs highly efficient switched capacitor power conversion devices targeted at the data center, solar and electric vehicle markets.

As data storage capacity grows exponentially, so does the need for highly efficient data center hardware and infrastructure.

Jason Young, president and CEO, Agility Power Systems, said: “Earlier this year, Agility Power Systems used the MuxCapacitor technology to create a 1kW 48V to 12V power converter with 97.6 percent peak efficiency using discrete components. This proof of concept unit was first demonstrated at the Helix Semiconductors booth at APEC in early March.

“Agility is now launching a new smaller, more cost effective and more functional version of that converter by integrating Helix’s MxC200 ASIC into the design in a way that amplifies the benefits of the already industry leading efficiency and power density characteristics of the MxC200.”

How is the MxC 200 DC-DC power IC bringing increased efficiency at data centers?

Bud Courville, VP of Business Development, Helix Semiconductors, said: “Our patented MuxCapacitor technology has a higher peak efficiency and maintains that efficiency across a much greater portion of the load curve when compared to traditional magnetic-based power conversion devices used in data centers.

“This feature creates higher operating efficiency and reduced heat generation across a wider range of applications than traditional power converters. Exact sizing of the power conversion device to the application’s specific load becomes less critical when near-peak efficiency is maintained through a wider range.”

How large is the financial benefit from reduced cooling costs due to lower heat generation?

To this, he added: “It depends on the Power Usage Effectiveness (PUE) of the data center and the cost per watt at each facility. Here is a brief definition and description of PUE.

“Power usage effectiveness (PUE) is a metric used to determine the energy efficiency of a data center. PUE is determined by dividing the amount of power entering a data center by the power used to run the computer infrastructure within it. PUE is, therefore, expressed as a ratio, with overall efficiency improving as the quotient decreases toward 1.”

PUE was created by members of The Green Grid, an industry group focused on data center energy efficiency. Data center infrastructure efficiency (DCIE) is the reciprocal of PUE and is expressed as a percentage that improves as it approaches 100 percent.

He said: “While PUE varies from data center to data center, recent studies indicate that the typical data center has an average PUE of around 1.7. This means that for every 1.7 watts in at the utility meter, only one watt is delivered out to the IT load.

“For every watt saved in operating efficiency at the point of load, 1.7 watts worth of energy costs are saved. In a 1kW power conversion device that would mean that an efficiency improvement of 5 percent would equate to a point of load savings of 50 watts and a total energy savings of 85 watts. The cost savings of this reduction in overall energy usage adds up quickly at data centers consuming large amounts of power 24 hours a day.”
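To make that arithmetic concrete, here is a small, illustrative calculation of PUE, DCIE, and the point-of-load savings Courville describes. The 1 kW converter, the 5 percent efficiency gain and the PUE of 1.7 come from the interview; the electricity tariff is an assumption added for illustration.

```python
# Illustrative PUE/DCIE and energy-savings arithmetic using the figures quoted above.
# The electricity tariff is an assumption; the other numbers follow the interview.
facility_power_w = 1700.0  # drawn at the utility meter per 1 kW of IT load
it_load_w = 1000.0         # delivered to the IT load

pue = facility_power_w / it_load_w  # 1.7
dcie = 1.0 / pue                    # ~59 percent

converter_w = 1000.0
efficiency_gain = 0.05                                   # 5 percent improvement
point_of_load_savings_w = converter_w * efficiency_gain  # 50 W at the load
facility_savings_w = point_of_load_savings_w * pue       # 85 W including overhead

tariff_usd_per_kwh = 0.10  # assumed electricity price
annual_savings_usd = facility_savings_w / 1000 * 24 * 365 * tariff_usd_per_kwh

print(f"PUE = {pue:.2f}, DCIE = {dcie:.0%}")
print(f"Savings: {point_of_load_savings_w:.0f} W at the load, {facility_savings_w:.0f} W overall")
print(f"Approx. annual saving per converter: ${annual_savings_usd:.2f}")
```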

How has the bidirectional nature of the Helix MuxCapacitor enabled new design configurations?

Courville said: “MuxCapacitor technology can be configured to operate as either a voltage step down or step up device within the same circuit. This makes it ideal for solar, EV and “Prosumer” renewable energy applications where power can be both drawn from or added to the grid or battery storage.”

Finally, what are the MxC 200’s other game-changing features and benefits in large power applications?

Courville said: “There are many features and benefits of the MxC 200 that improve performance in large power usage applications. The most pronounced benefit by far is the significant cost savings that results from improved efficiency both at peak load conditions and across the broader load curve.

“This cost savings comes both from a reduction in power consumed to operate the load and power consumed to temperature control the environment. For a smaller data center facility with a PUE of 2.0, the cost savings is double that of the savings from the reduction in energy consumed to drive the load.

“The power density of the MxC 200 is another key feature. In addition to reducing heat and cost through higher efficiency, the MxC 200 can also reduce the weight and size required for a power conversion device.

“The MxC 200 also has multiple output voltage settings. For Agility’s 48V input device, this feature would allow for output voltages of 24V or 6V in addition to the primary 12V output. The bidirectional nature of MuxCapacitor technology makes it ideal for certain applications.”
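One way to read the 48 V input with 24 V, 12 V and 6 V output options is as power-of-two divisions, which is what cascaded 2:1 switched-capacitor stages naturally produce. The snippet below is only an illustrative interpretation of the ratios quoted above, not a description of Helix’s actual MuxCapacitor topology.

```python
# Illustrative only: power-of-two output options from cascaded 2:1 stages.
# A generic switched-capacitor interpretation, not Helix's actual MuxCapacitor design.
v_in = 48.0
for stages in range(1, 4):
    print(f"{stages} x 2:1 stage(s): {v_in / (2 ** stages):.0f} V")
# -> 24 V, 12 V, 6 V, matching the output settings mentioned above.
```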

DVC provides fantastic opportunity: NetApp

NetApp has introduced the Data Visionary Engineering Center (DVC) in Bangalore. Paul van Linden, manager, EMEA and APAC EBC Program, said that as of now, there are four DVCs: Sunnyvale, California, and RTP, North Carolina, in the USA; Amsterdam, the Netherlands; and now, Bangalore.

Having a DVC does make a difference. Linden said: “Partners are hugely important. In a 2017 APBM survey, 86 percent said their purchase size increased due to the visit. 30 percent said that NetApp is a trusted advisor. 42 percent said that their sales cycle had reduced (by up to 9 percent). And, 79 percent said that they discovered additional products (up by 15 percent).” He added, “We provide proven business acceleration.”

On the question of why have a DVC in Bangalore, he said: “Global customers have some very unique requirements. For example, they would like to have detailed conversations with coders. This (DVC) is a fantastic opportunity.”
Anil Valluri, president, Sales, India and SAARC, said: “It is a recognition of two things – one, the vibrancy of the market, and two, the huge amount of engineering talent in India. There are a lot of services being launched by the government. There is a growing market, with a lot of cutting-edge technology. We can tell people how to embrace digital transformation.

“The global SIs architecture centers are here. They can come here, and use technologies. It is a recognition of the potential of the Indian market. We can also serve as the knowledge center.”

Deepak Vishweswaraiah, MD and SVP, Data Fabric and Manageability Group, noted: “The whole digital transformation is not unique to NetApp. We are helping customers to progress on their data journey visions. Customers need to find new ways to do business. They have to find newer customers and newer ways to do business.

“We are also introducing the NetApp Cloud Volumes for Google Cloud Platform (GCP). We are now delivering data services with all the world’s largest hyper-scalers, such as Azure, AWS and Google Cloud Platform.

“We have modernized the IT architecture with Cloud Connected Flash. It powers AI and high-performance applications with the world’s fastest enterprise all-flash array, the end-to-end NVMe AFF A800.

“The NetApp ONTAP 9.4 storage OS improves performance, efficiency and data protection, also providing the industry’s first enterprise 30TB SSDs. It enables GDPR compliance and secures the data. New, intelligent cloud services further reduce TCO. The Active IQ provides insights for higher operational efficiency.

“We have also announced the NetApp Cloud Insights – Hybrid Cloud ITIM, delivered via SaaS. It improves customer satisfaction, pro-actively prevents failures, and optimizes to reduce cost. We have automated the tamper-proof retention of critical financial data.

“We are now accelerating our data visionary footprint in India. We have the largest R&D teams for NetApp in India.”

Xilinx’s vision for an adaptable, intelligent world!

Victor Peng, president and CEO of Xilinx Inc., unveiled his vision and strategy to enable the “adaptable, intelligent world.” Xilinx moves beyond the FPGA to deliver a completely new category of highly flexible and adaptive processors and platforms that will allow for rapid innovation across a wide array of technologies.

Peng’s strategy involves three key points:
* Emphasis on data center acceleration.
* Accelerating growth in core markets.
* Introducing the Adaptive Compute Acceleration Platform (ACAP).

Let’s find out what the new innovation around data center acceleration is.

Peng said: “In a data center, there are three areas that need acceleration — compute, storage, and network. Xilinx already provides FPGA-based acceleration solutions for storage and network. A recent major trend is compute. Many data center users would like to use compute resources for a broad set of applications in the emerging era of Big Data and artificial intelligence, like video transcoding, database, data compression, search, AI inference, genomics, machine vision, etc.

“These applications are not a good fit for the CPU architecture. So, markets need an application-specific acceleration solution.”

Next, how is Xilinx looking to accelerate growth in core markets? Peng added: “In the core markets, it is not directly related to acceleration. All the core markets that Xilinx has highlighted are important for our current and future businesses. Xilinx keeps investing in these areas as well. Of course, these applications will use the cloud/data center for their businesses. Xilinx’s acceleration solutions also help them by providing an adaptable compute acceleration platform.”

Lastly, what is the Adaptive Compute Acceleration Platform (ACAP), and what is the range of applications and workloads for it?

Peng noted, “ACAP will cover a broad set of applications in the emerging era of big data and artificial intelligence, like video transcoding, database, data compression, search, AI inference, genomics, machine vision, etc.”

As for the outlook for the global semiconductor industry in 2018, Xilinx declined to comment. However, some analysts would have an opinion.