Tech News - IoT and Hardware

119 Articles

ROS Melodic Morenia released

Gebin George
28 May 2018
2 min read
ROS is a middleware that provides a set of tools and software frameworks for building and simulating robots. ROS follows a stable release cycle, shipping a new version every year on the 23rd of May. This year, ROS released its Melodic Morenia version on that date, with a decent number of enhancements and upgrades. Here are the highlights from the release notes:

class_loader header deprecation

class_loader's headers have been renamed and the previous ones deprecated, in an effort to bring the package closer to multi-platform support and its ROS 2 counterpart. You can refer to the migration script provided for the header replacements; PRs will be filed for all the packages in the previous ROS distribution.

kdl_parser package enhancement

kdl_parser has deprecated a method that was tied to tinyxml (which was itself already deprecated). The tinyxml2-based replacement is:

bool treeFromXml(const tinyxml2::XMLDocument * xml_doc, KDL::Tree & tree)

The deprecated API will be removed in N-turtle.

OpenCV version update

For standardization reasons, the supported OpenCV version is restricted to 3.2.

Enhancements in pluginlib

Similar to class_loader, the headers here were deprecated as well to bring them closer to multi-platform support. plugin_tool, which had been deprecated for years, has finally been removed in this version.

For more updates on ROS packages, refer to the ROS Wiki page.
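For readers who want to see the new overload in context, here is a minimal sketch of how it might be called from application code. The header path and namespace (kdl_parser/kdl_parser.hpp, kdl_parser::treeFromXml) are assumed from the package's conventional layout and are not spelled out in the post itself.

    #include <tinyxml2.h>
    #include <kdl/tree.hpp>
    #include <kdl_parser/kdl_parser.hpp>  // assumed header location
    #include <cstdio>

    int main() {
      // Load the robot description (URDF) with tinyxml2 rather than the
      // deprecated tinyxml types.
      tinyxml2::XMLDocument xml_doc;
      if (xml_doc.LoadFile("robot.urdf") != tinyxml2::XML_SUCCESS) {
        std::fprintf(stderr, "failed to load robot.urdf\n");
        return 1;
      }

      // Build the KDL tree using the tinyxml2-based overload quoted above.
      KDL::Tree tree;
      if (!kdl_parser::treeFromXml(&xml_doc, tree)) {
        std::fprintf(stderr, "failed to construct the KDL tree\n");
        return 1;
      }

      std::printf("KDL tree has %u joints\n", tree.getNrOfJoints());
      return 0;
    }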

Nvidia unveils a new Turing architecture: “The world’s first ray tracing GPU”

Fatema Patrawala
14 Aug 2018
4 min read
The Siggraph 2018 conference brought big announcements from Nvidia: the company unveiled a new Turing architecture and three new pro-oriented workstation graphics cards in its Quadro family. This is Nvidia's greatest leap since the introduction of the CUDA GPU in 2006. The Turing architecture features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing, together enabling real-time ray tracing. The two engines, along with more powerful compute for simulation and enhanced rasterization, will usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks, and fluid interactivity on highly complex models.

The company also unveiled its initial Turing-based products: the NVIDIA® Quadro® RTX™ 8000, Quadro RTX 6000, and Quadro RTX 5000 GPUs. They are expected to revolutionize the work of approximately 50 million designers and artists across multiple industries. At the annual Siggraph conference, Jensen Huang, founder and CEO of Nvidia, said, "Turing is NVIDIA's most important innovation in computer graphics in more than a decade. Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."

Here is the list of Turing architecture features in detail.

Real-time ray tracing accelerated by RT Cores

The Turing architecture is armed with dedicated ray-tracing processors called RT Cores. They accelerate the computation of how light and sound travel in 3D environments, at up to 10 GigaRays a second. Turing accelerates real-time ray tracing operations by up to 25x compared with the previous Pascal generation, and GPU nodes can be used for final-frame rendering of film effects at more than 30x the speed of CPU nodes.

AI accelerated by powerful Tensor Cores

The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations a second. They will power AI-enhanced features for creating applications with new capabilities, including DLAA (deep learning anti-aliasing). DLAA is a breakthrough in high-quality motion image generation for denoising, resolution scaling, and video re-timing. These features are part of the NVIDIA NGX™ software development kit, a new deep learning-powered technology stack that will enable developers to easily integrate accelerated, enhanced graphics, photo imaging, and video processing into applications with pre-trained networks.

Faster simulation and rasterization with the new Turing streaming multiprocessor

The new Turing-based GPUs feature a new streaming multiprocessor architecture that adds an integer execution unit executing in parallel with the floating-point datapath, plus a new unified cache architecture with double the bandwidth of the previous generation. Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating-point operations in parallel with 16 trillion integer operations per second.

Developers will be able to take advantage of NVIDIA's CUDA 10, FleX, and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments, and special effects. The new Turing architecture has already received support from companies like Adobe, Pixar, Siemens, Black Magic, Weta Digital, Epic Games, and Autodesk.

The new Quadro RTX is priced at $2,300 for a 16GB version and $6,300 for a 24GB version. Double the memory to 48GB and Nvidia expects you to pay about $10,000 for the high-end card. For more information, visit the official Nvidia blog.

IoT project: Design a Multi-Robot Cooperation model with Swarm Intelligence [Tutorial]
Amazon Echo vs Google Home: Next-gen IoT war
5 DIY IoT projects you can build under $50

IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others

Natasha Mathur
21 Dec 2018
5 min read
IEEE Computer Society (IEEE-CS) released its annual tech future predictions earlier this week, unveiling the ten technology trends it considers most likely to be adopted in 2019. "The Computer Society's predictions are based on an in-depth analysis by a team of leading technology experts, [and] identify top technologies that have substantial potential to disrupt the market in the year 2019," said Hironori Kasahara, IEEE Computer Society President. Let's have a look at the top ten technology trends predicted to reach wide adoption in 2019.

Top ten trends for 2019

Deep learning accelerators
According to the IEEE Computer Society, 2019 will see wide-scale adoption of companies designing their own deep learning accelerators, such as GPUs, FPGAs, and TPUs, which can be used in data centers. The development of these accelerators will further allow machine learning to be used in different IoT devices and appliances.

Assisted transportation
Another trend predicted for 2019 is the adoption of assisted transportation, which is already paving the way for fully autonomous vehicles. Although fully autonomous vehicles have not entirely arrived, self-driving tech saw a booming year in 2018. For instance, AWS introduced DeepRacer, a self-driving race car; Tesla is building its own AI hardware for self-driving cars; Alphabet's Waymo will be launching the world's first commercial self-driving cars in the upcoming months; and so on. Beyond self-driving, assisted transportation is also highly dependent on deep learning accelerators for video recognition.

The Internet of Bodies (IoB)
As per the IEEE Computer Society, consumers have become very comfortable with self-monitoring using external devices like fitness trackers and smart glasses. With digital pills now entering mainstream medicine, body-attached, implantable, and embedded IoB devices provide richer data that enables the development of unique applications. However, IEEE notes that this tech also brings concerns related to security, privacy, physical harm, and abuse.

Social credit algorithms
Facial recognition tech was in the spotlight in 2018. For instance, Microsoft President Brad Smith requested this month that governments regulate the evolution of facial recognition technology, Google patented a new facial recognition system that uses your social network to identify you, and so on. According to the IEEE, social credit algorithms will now see a rise in adoption in 2019. Social credit algorithms make use of facial recognition and other advanced biometrics to identify a person and retrieve data about them from digital platforms, which is then used to approve or deny access to consumer products and services.

Advanced (smart) materials and devices
The IEEE Computer Society predicts that in 2019, advanced materials and devices for sensors, actuators, and wireless communications will see widespread adoption. These materials, which include tunable glass, smart paper, and ingestible transmitters, will lead to the development of applications in healthcare, packaging, and other appliances. "These technologies will also advance pervasive, ubiquitous, and immersive computing, such as the recent announcement of a cellular phone with a foldable screen. The use of such technologies will have a large impact on the way we perceive IoT devices and will lead to new usage models", mentions the IEEE Computer Society.

Active security protection
From data breaches (Facebook, Google, Quora, Cathay Pacific, etc.) to cyber attacks, 2018 saw many security-related incidents. 2019 will see a new generation of security mechanisms that take an active approach to fighting these incidents. These would involve hooks that can be activated when new types of attacks are exposed, and machine learning mechanisms that can help identify sophisticated attacks.

Virtual reality (VR) and augmented reality (AR)
Packt's 2018 Skill Up report highlighted what game developers feel about the VR world: a whopping 86% of respondents replied with 'Yes, VR is here to stay'. The IEEE Computer Society echoes that thought, believing that VR and AR technologies will see even greater wide-scale adoption and will prove very useful for education, engineering, and other fields in 2019. IEEE believes that now that advertisements for VR headsets appear during prime-time television programs, VR/AR will see wide-scale adoption in 2019.

Chatbots
2019 will also see an expansion in the development of chatbot applications. Chatbots are used quite frequently for basic customer service on social networking hubs, and as intelligent virtual assistants in operating systems. Chatbots will also find applications in interaction with cognitively impaired children for therapeutic support. "We have recently witnessed the use of chatbots as personal assistants capable of machine-to-machine communications as well. In fact, chatbots mimic humans so well that some countries are considering requiring chatbots to disclose that they are not human", mentions IEEE.

Automated voice spam (robocall) prevention
IEEE predicts that automated voice spam prevention technology will see widespread adoption in 2019. It will be able to block spoofed caller IDs and, in turn, intercept "questionable calls" so that the computer can ask the caller questions to determine whether the caller is legitimate.

Technology for humanity (specifically machine learning)
IEEE predicts an increase in the adoption rate of tech for humanity. Advances in IoT and edge computing are the leading factors driving the adoption of this technology. Events such as fires and bridge collapses are further creating the urgency to adopt these monitoring technologies in forests and on smart roads.

"The technical community depends on the Computer Society as the source of technology IP, trends, and information. IEEE-CS predictions represent our commitment to keeping our community prepared for the technological landscape of the future," says the IEEE Computer Society. For more information, check out the official IEEE Computer Society announcement.

Key trends in software development in 2019: cloud native and the shrinking stack
Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity
Quantum computing, edge analytics, and meta learning: key trends in data science and big data in 2019

Raspberry Pi opens its first offline store in England

Prasad Ramesh
08 Feb 2019
2 min read
Raspberry Pi has opened a brick-and-mortar retail store in Cambridge, England. The mini computer maker has always sold its products online, shipping to many countries; this offline store is a first for the company.

Located in the Grand Arcade shopping centre, the Raspberry Pi store opened yesterday. It is not just a boring store with Raspberry Pi boards: the collection includes boards, full demo setups with monitors, keyboards and mice, books, mugs, and even soft toys with Raspberry Pi branding. You can see some pictures of the new store here: https://twitter.com/Raspberry_Pi/status/1093454153534398464

A user shared his observation of the store on Hacker News: "I had a minute to check it out over lunch - most of the floorspace is dedicated to demonstrating what the raspberry pi can do at a high level. They had stations for coding, gaming, sensors, etc. but only ~1/4th of the space was devoted to inventory. They have a decent selection of Pis, sensor kits, and accessories. Not everyone working there was technical. This is definitely aimed at the general public."

Raspberry Pi has a strong online community, with people coming up with various DIY projects, but that community is limited to people who already have a keen interest in the device. More stores like this will help familiarize more people with Raspberry Pi. With branded books, demos, and toys, the store aims to popularize the mini computer.

Introducing Strato Pi: An industrial Raspberry Pi
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25
Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV

Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV

Prasad Ramesh
19 Oct 2018
2 min read
Yesterday the Raspberry Pi Foundation launched a new device called the Raspberry Pi TV HAT. It is a small add-on board that, with a TV antenna connected, lets you decode and stream live TV. The TV HAT is roughly the size of a Raspberry Pi Zero board. It connects to the Raspberry Pi via the GPIO connector and has a port for a TV antenna connector. The new Raspberry Pi add-on follows a new HAT (Hardware Attached on Top) form factor: the add-on itself is a half-sized HAT matching the outline of Raspberry Pi Zero boards. (Image source: Raspberry Pi website)

TV HAT specifications and requirements

The add-on board has a Sony CXD2880 TV tuner. It supports TV standards like DVB-T2 (1.7MHz, 5MHz, 6MHz, 7MHz, 8MHz channel bandwidth) and DVB-T (5MHz, 6MHz, 7MHz, 8MHz channel bandwidth). The frequencies it can receive are VHF III, UHF IV, and UHF V. Raspbian Stretch (or later) is required to use the Raspberry Pi TV HAT, and TVHeadend is the recommended software to get started with TV streams. There is a 'Getting Started' guide on the Raspberry Pi website.

Watching on the Raspberry Pi

With the TV HAT, a Raspberry Pi board can receive and display television. The Pi can also be used as a server to stream television over a network to other devices. When running as a server, the TV HAT works with all 40-pin GPIO Raspberry Pi boards. Watching TV on the Pi itself needs more processing power, so a Pi 2, 3, or 3B+ is recommended. The TV HAT connected to a Raspberry Pi board: (Image source: Raspberry Pi website)

Streaming over a network

Connecting a TV HAT to your network allows viewing streams on any device connected to the network, including computers, smartphones, and tablets. Initially, the TV HAT will be available only in Europe. It is now on sale for $21.50; visit the Raspberry Pi website for more details.

Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
How to secure your Raspberry Pi board [Tutorial]
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

Samsung develops Key-value SSD prototype, paving the way for optimizing network storage efficiency and extending server CPU processing power

Savia Lobo
06 Sep 2019
4 min read
Two days ago, Samsung announced a new prototype key-value Solid State Drive (SSD) that is compatible with the industry-standard API for key-value storage devices. The key-value SSD prototype moves the storage workload from server CPUs into the SSD itself, without any supporting device. This will simplify software programming and make more effective use of storage resources in IT applications. The new prototype features extensive scalability, improved durability, improved software efficiency, improved system-level performance, and a reduced write amplification factor (WAF).

Applications built on software-based KV stores need to handle garbage collection using a method called compaction. However, this affects system performance, as both the host CPU and the SSD work to clear away the garbage. "By moving these operations to the SSD in a straightforward, standardized manner, KV SSDs will represent a major upgrade in the way that storage is accessed in the future," the press release states. Garbage collection can be handled entirely in the SSD, freeing the CPU to handle the computational work. (A toy sketch of this key-value programming model appears at the end of this post.)

Hangu Sohn, Vice President of NAND Product Planning, Samsung Electronics, said in a press release, "Our KV SSD prototype is leading the industry into a new realm of standardized next-generation SSDs, one that we anticipate will go a long way in optimizing the efficiency of network storage and extending the processing power of the server CPUs to which they're connected."

Also read: Samsung speeds up on-device AI processing with a 4x lighter and 8x faster algorithm

Samsung's KV SSD prototype is based on a new open standard for a Key-Value Application Programming Interface (KV API) that was recently approved by the Storage Networking Industry Association (SNIA). Michael Oros, SNIA Executive Director, said, "The SNIA KV API specification, which provides an industry-wide interface between an application and a Key Value SSD, paves the way for widespread industry adoption of a standardized KV API protocol."

Hugo Patterson, co-founder and Chief Scientist at Datrium, said, "SNIA's KV API is enabling a new generation of architectures for shared storage that is high-performance and scalable. Cloud object stores have shown the power of KV for scaling shared storage, but they fall short for data-intensive applications demanding low latency." "The KV API has the potential to get the server out of the way in becoming the standard-bearer for data-intensive applications, and Samsung's KV SSD is a groundbreaking step towards this future," Patterson added.

A user on Hacker News writes, "Would be interesting if this evolves into a full filesystem implementation in hardware (they talk about Object Drive but aren't focused on that yet). Some interesting future possibilities:
- A cross-platform filesystem that you could read/write from Windows, macOS, Linux, iOS, Android etc. Imagine having a single disk that could boot any computer operating system without having to manage partitions and boot records!
- Significantly improved filesystem performance as it's implemented in hardware.
- Better guarantees of write flushing (as SSD can include RAM + tiny battery) that translate into higher level filesystem objects. You could say, writeFile(key, data, flush_full, completion) and receive a callback when the file is on disk. All independent of the OS or kernel version you're running on.
- Native async support is a huge win
Already the performance is looking insane. Would love to get away from the OS dictating filesystem choice and performance."

To know more about this news in detail, read the report on the Samsung Key Value SSD.

Other interesting news in hardware:
Red Hat joins the RISC-V foundation as a Silver level member
AMD competes with Intel by launching EPYC Rome, world's first 7 nm chip for data centers, luring in Twitter and Google
Intel's 10th gen 10nm 'Ice Lake' processor offers AI apps, new graphics and best connectivity
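To make the key-value programming model above concrete, here is a toy, self-contained sketch of what storing and retrieving a value by key (with no filesystem or block offsets in between) looks like. The kvs_store / kvs_retrieve names are hypothetical placeholders backed here by an in-memory map; they are not the actual SNIA KV API.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    // Toy stand-in for a key-value SSD: the application addresses data purely
    // by key, and the "device" (here just a map) handles placement internally.
    using Value = std::vector<unsigned char>;
    static std::map<std::string, Value> g_device;  // pretend this lives on the SSD

    bool kvs_store(const std::string& key, const void* data, std::size_t len) {
      const unsigned char* p = static_cast<const unsigned char*>(data);
      g_device[key] = Value(p, p + len);
      return true;
    }

    bool kvs_retrieve(const std::string& key, Value& out) {
      auto it = g_device.find(key);
      if (it == g_device.end()) return false;
      out = it->second;
      return true;
    }

    int main() {
      const char payload[] = "sensor-reading:42";
      // Store under a key; no file, directory, or logical block address is
      // visible to the application.
      kvs_store("telemetry/device17/latest", payload, sizeof(payload));

      Value out;
      if (kvs_retrieve("telemetry/device17/latest", out)) {
        std::printf("retrieved %zu bytes: %s\n", out.size(),
                    reinterpret_cast<const char*>(out.data()));
      }
      return 0;
    }

In a real KV SSD, the map above is replaced by the drive's own index, which is exactly where the garbage-collection work the press release describes gets offloaded.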

NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 2018

Bhagyashree R
04 Sep 2018
5 min read
Last week, James Carter and Stephen Smalley presented the architecture and security mechanisms of two operating systems, Zephyr and Fuchsia, at the Linux Security Summit 2018. James and Stephen are computer security researchers in the Information Assurance Research organization of the US National Security Agency (NSA). They discussed the current concerns in these operating systems and the contributions made by them and others to further advance the security of these emerging open source operating systems. They also compared the security features of Zephyr and Fuchsia to Linux and Linux-based systems such as Android.

Zephyr

Zephyr is a scalable real-time operating system (RTOS) for IoT devices, supporting multiple architectures with security as the main focus. It targets resource-constrained devices, seeking to be a new "Linux" for little devices.

Protection mechanisms in Zephyr
Zephyr introduced basic hardware-enforced memory protections in the v1.8 release, and these were officially supported in the v1.9 release. The microcontroller must have either a memory protection unit (MPU) or a memory management unit (MMU) to support these protection mechanisms. These mechanisms provide protection in the following ways:
- They enforce Read Only/No Execute (RO/NX) restrictions to protect read-only data from tampering.
- They provide runtime support for stack depth overflow protections.
The researchers' contribution was to review the basic memory protections and develop a set of kernel memory protection tests modeled after a subset of the lkdtm tests in Linux from KSPP. These tests were able to detect bugs and regressions in the Zephyr MPU drivers and are now part of the standard regression testing that Zephyr performs on all future changes.

Userspace support in Zephyr
In previous versions everything ran in supervisor mode, so Zephyr introduced userspace support in v1.10 and v1.11. This requires the basic memory protection support and an MPU/MMU, and it provides basic support for user mode threads with isolated memory. The researchers' contribution here was to develop userspace tests to verify some of the security-relevant properties of user mode threads, confirm the correctness of the x86 implementation, and validate the initial ARM and ARC userspace implementations.

App Shared Memory: a new feature contributed by the researchers
Originally, Zephyr gave all user threads access to the global variables of all applications. This imposed a high burden on application developers, who had to:
- Manually organize application global variable memory layout to meet (MPU-specific) size/alignment restrictions.
- Manually define and assign memory partitions and domains.
To solve this problem, the researchers developed a new feature, App Shared Memory, which will come out in the v1.13 release. Its features:
- It is a more developer-friendly way of grouping application globals based on desired protections.
- It automatically generates the linker script, section markings, and memory partition/domain structures.
- It provides helpers to ease application coding.

Fuchsia

Fuchsia is an open source microkernel-based operating system, primarily developed by Google. It is based on a new microkernel called Zircon and targets modern hardware such as phones and laptops.

Security mechanisms in Fuchsia

Microkernel security primitives
Regular handles: Through handles, userspace can access kernel objects. Handles identify both the object and a set of access rights to the object. With proper rights, one can duplicate objects, pass them across IPC, and obtain handles to child objects. Some of the concerns pointed out for regular handles are:
- If you have a handle to a job, you can get a handle to anything in the job using object_get_child()
- Leak of the root job handle
- Refining default rights down to least privilege
- Not all operations check access rights
- Some rights are currently unimplemented

Resource handles: These are a variant of handles for platform resources such as memory-mapped I/O, I/O ports, IRQs, and hypervisor guests. Some of the concerns pointed out for resource handles are:
- Coarse granularity of root resource checks
- Leak of the root resource handle
- Refining the root resource down to least privilege

Job policy: In Fuchsia, every process is part of a job, and jobs can have child jobs. Job policy is applied to all processes within the job. These policies include error handling behavior, object creation, and mapping of WX memory. Some of the concerns pointed out for job policies are:
- Write-execute (WX) policy is not yet implemented
- Inflexible mechanism
- Refining job policies down to least privilege

vDSO (virtual dynamic shared object) enforcement: The vDSO is the only way to invoke system calls and is fully read-only. Some of the concerns pointed out for vDSO enforcement are:
- Potential for tampering with or bypassing the vDSO; for example, process_writes_memory() allows you to overwrite the vDSO
- Limited flexibility, for example, as compared to seccomp

Userspace mechanisms
Namespaces: A namespace is a collection of objects that you can enumerate and access.
Sandboxing: A sandbox is the configuration of a process's namespace created based on its manifest. Some of the concerns pointed out for namespaces and sandboxing are:
- Sandboxing applies only to application packages (not system services)
- Namespace and sandbox granularity
- No independent validation of sandbox configuration
- Currently uses global /data and /tmp

To address the aforementioned concerns, the researchers suggested a MAC framework. It could help in the following ways:
- Support finer-grained resource checks
- Validate the namespace/sandbox configuration
- Help control propagation, support revocation, and apply least privilege
- Just like in Android, provide a unified framework for defining, enforcing, and validating security goals for Fuchsia

This was a sneak peek of the talk. To know more about the architecture, hardware limitations, and security features of Zephyr and Fuchsia in detail, watch the presentation on YouTube: Security in Zephyr and Fuchsia - Stephen Smalley & James Carter, National Security Agency.

Cryptojacking is a growing cybersecurity threat, report warns
Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation

What if buildings of the future could compute? European researchers make a proposal.

Prasad Ramesh
23 Nov 2018
3 min read
European researchers have proposed an idea for buildings that could compute. In the paper "On buildings that compute. A proposal", published this week, they propose integrating computation into various parts of a building, from cement and bricks to paint.

What is the idea about?

Smart homes today are made up of several individual smart appliances, which may work individually or be interconnected via a central hub. "What if intelligent matter of our surrounding could understand us humans?" The idea is that the walls of a building, in addition to supporting the roof, would take on more functionality: sensing, calculating, communicating, and even producing power. Each brick or block could be thought of as a decentralized computing entity, and these blocks could contribute to a large-scale parallel computation. This would transform a smart building into an intelligent computing unit that people can live in and interact with. Such computing buildings, the researchers say, could potentially offer protection from crime, natural disasters, and structural damage within the building, or simply send a greeting to the people residing there.

When nanotechnology meets embedded computing

The proposal involves using nanotechnology to embed computation and sensing directly into the construction materials. This includes intelligent concrete blocks and stimuli-responsive smart paint. The photo-sensitive paint would sense the internal and external environment, while a nano-material-infused concrete composition would sense the building environment, implementing parallel information processing on a large scale and resulting in distributed decision making. The result is a building which can be seen as a huge parallel computer made of computing concrete blocks. (A toy sketch of this kind of distributed decision making follows at the end of this post.)

The key concepts behind the idea are functional nanoparticles which are photo-, chemo- and electro-sensitive. A range of electrical properties spans the electronic elements mixed into the concrete. The concrete is used to make the building blocks, which are equipped with processors. These processors gather information from distributed sensory elements, help in decision making and location communication, and enable advanced computing. Together the blocks form a wall, which acts as a huge parallel array processor. The researchers envision a single building or a small colony of buildings turning into a large-scale universal computing unit.

This is an interesting idea, bizarre even, but its practicality is blurry. Can its applications justify the cost involved in creating such a building? There is also a question of sustainability: how long will the building last before it has to be redeveloped? I for one think that redevelopment would almost certainly undo the computational aspect of it. For more details, read the research paper.

Home Assistant: an open source Python home automation hub to rule all things smart
The iRobot Roomba i7+ is a cleaning robot that maps and stores your house and also empties the trash automatically
Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
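As a very rough illustration of the "wall as a parallel array processor" idea (this is not code from the paper), here is a toy sketch in which every block in a grid runs the same local rule: compare its own sensor reading against the average of its neighbours and raise a local alarm if it deviates too much, a simple form of distributed decision making with no central hub.

    #include <cstdio>
    #include <vector>

    // Toy model of a wall of "smart blocks" on a W x H grid. Each block holds a
    // sensor reading (say, strain) and decides locally whether it looks anomalous
    // compared to its neighbours. Purely illustrative, not from the paper.
    struct Block { double reading; bool alarm; };

    int main() {
      const int W = 6, H = 4;
      std::vector<Block> wall(W * H, Block{1.0, false});
      wall[2 * W + 3].reading = 5.0;  // one block senses unusual strain

      // Every block applies the same local rule; on real hardware each block
      // would run this in parallel, here we simply loop over them.
      for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
          double sum = 0.0; int n = 0;
          for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
              int nx = x + dx, ny = y + dy;
              if ((dx != 0 || dy != 0) && nx >= 0 && nx < W && ny >= 0 && ny < H) {
                sum += wall[ny * W + nx].reading; ++n;
              }
            }
          Block& b = wall[y * W + x];
          b.alarm = (b.reading > 2.0 * (sum / n));  // local decision, no central hub
        }
      }

      for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
          if (wall[y * W + x].alarm)
            std::printf("block (%d,%d) raised a local alarm\n", x, y);
      return 0;
    }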

Rigetti plans to deploy 128 qubit chip Quantum computer

Fatema Patrawala
16 Aug 2018
3 min read
Rigetti Computing is committed to building the world's most powerful computers, and believes the true value of quantum will be unlocked by practical applications. Rigetti CEO Chad Rigetti recently posted on Medium about the company's plan to deploy a 128-qubit quantum computing system, challenging Google, IBM, and Intel for leadership in this emerging technology. Rigetti plans to deploy the system within the next 12 months, and shared its investment in resources at the application layer to encourage experimentation on quantum computers.

Over the past year, Rigetti has built 8-qubit and 19-qubit superconducting quantum processors, which are accessible to users over the cloud through its open source software platform, Forest. These chips have helped researchers around the globe carry out and test programs on Rigetti's quantum-classical hybrid computers. However, to drive practical use of quantum computing today, Rigetti must be able to scale and improve the performance of the chips and connect them to the electronics on which they run. To achieve this, the next phase of quantum computing will require more power at the hardware level to drive better results. Rigetti believes it is in a unique position to solve this problem and build systems that scale.

Chad Rigetti adds, "Our 128-qubit chip is developed on a new form factor that lends itself to rapid scaling. Because our in-house design, fab, software, and applications teams work closely together, we're able to iterate and deploy new systems quickly. Our custom control electronics are designed specifically for hybrid quantum-classical computers, and we have begun integrating a 3D signaling architecture that will allow for truly scalable quantum chips. Over the next year, we'll put these pieces together to bring more power to researchers and developers."

While focused on building the 128-qubit chip, the Rigetti team is also looking at ways to enhance the application layer by pursuing quantum advantage in three areas: quantum simulation, optimization, and machine learning. The team believes quantum advantage will be achieved by creating a solution that is faster, cheaper, and of better quality. They have posed an open question as to which industry will build the first commercially useful application that adds tremendous value to researchers and businesses around the world. Read the full coverage in the Rigetti Medium post.

Quantum Computing is poised to take a quantum leap with industries and governments on its side
Q# 101: Getting to know the basics of Microsoft's new quantum computing language
PyCon US 2018 Highlights: Quantum computing, blockchains and serverless rule!

MIT researchers built a 16-bit RISC-V compliant microprocessor from carbon nanotubes

Amrata Joshi
30 Aug 2019
5 min read
On Wednesday, MIT researchers published a paper on building a modern microprocessor from carbon nanotube transistors, a greener alternative to their traditional silicon counterparts. The MIT researchers used carbon nanotubes to make a general-purpose, RISC-V-compliant microprocessor that handles 32-bit instructions and does 16-bit memory addressing.

Carbon nanotubes naturally come in semiconducting forms, exhibit useful electrical properties, and are extremely small. Carbon nanotube field-effect transistors (CNFETs) have properties that can give greater speeds and around 10 times the energy efficiency of silicon.

Co-author of the paper Max M. Shulaker, the Emanuel E Landsman Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Microsystems Technology Laboratories, says, "This is by far the most advanced chip made from any emerging nanotechnology that is promising for high-performance and energy-efficient computing." Shulaker further added, "There are limits to silicon. If we want to continue to have gains in computing, carbon nanotubes represent one of the most promising ways to overcome those limits. [The paper] completely re-invents how we build chips with carbon nanotubes."

Limitations of carbon nanotubes and how the researchers addressed them

According to the research paper, silicon has the advantage that it can be easily doped, whereas carbon nanotubes are so small that doping them is difficult. It is also difficult to grow the nanotubes where they are needed, and equally difficult to manipulate them or place them in the right location. And when carbon nanotubes are fabricated at scale, the transistors usually come with many defects that affect performance, making them impractical to use.

To overcome this, the MIT researchers invented new techniques to limit the defects and provide full functional control in fabricating CNFETs, using processes available in traditional silicon chip foundries. First, the researchers made a silicon surface with metallic features that were large enough to let several nanotubes bridge the gaps between the metal. Then they placed a layer of material on top of the nanotubes and used sonication to get rid of the aggregates; the material took the aggregates with it while leaving the underlying layer of nanotubes undisturbed. To confine the nanotubes to where they were needed, the researchers etched away most of the nanotube layer, and they then added a variable layer of oxide on top of the nanotubes. The researchers also demonstrated a 16-bit microprocessor with more than 14,000 CNFETs that performs the same kinds of tasks as commercial microprocessors.

DREAM: relaxing the purity requirement for carbon nanotubes

Advanced circuits need carbon nanotubes of around 99.999999 percent purity to be robust to failures, which is nearly impossible to achieve today. The researchers introduced a technique called DREAM ("designing resiliency against metallic CNTs") that positions metallic CNFETs in a way that they do not disrupt computing. This relaxed the stringent purity requirement by around four orders of magnitude, or 10,000 times: the researchers then required carbon nanotubes of only about 99.99 percent purity, which is attainable.

RINSE: cleaning contamination off the chip

For CNFET fabrication, the carbon nanotubes are deposited in a solution onto a wafer with predesigned transistor architectures. In this process, carbon nanotubes stick randomly together in big bundles that form contamination on the chip. To clean off this contamination, the researchers developed RINSE ("removal of incubated nanotubes through selective exfoliation"). The wafer is pretreated with an agent that promotes carbon nanotube adhesion, then coated with a polymer and dipped in a special solvent. The solvent washes away the polymer, which carries the big bundles with it, while single carbon nanotubes remain stuck to the wafer. The RINSE technique can lead to about a 250-times reduction in particle density on the chip compared to similar methods.

New chip design: RV16X-NANO handles 32-bit instructions on the RISC-V architecture

The researchers built a new chip design and drew insights from it. According to those insights, some logic functions were less sensitive to metallic nanotubes than others. The researchers modified an open-source RISC design tool to take this information into account, resulting in a chip design that contains none of the gates most sensitive to metallic carbon nanotubes. The team named the chip RV16X-NANO; it is designed to handle the 32-bit-long instructions of the RISC-V architecture. They used more than 14,000 individual transistors for the RV16X-NANO, and every single one of them worked as planned. The chip successfully executed a variant of the traditional "Hello World" program, which is often used as an introduction to the syntax of different programming languages.

In the paper, the researchers also discuss ways to improve their existing design. The design needs to tolerate metallic nanotubes, since each transistor will contain multiple nanotubes, and it must ensure that a few nanotubes in bad orientations do not prevent others from forming functional contacts. The researchers' major goal is to make single-nanotube transistors, which would require the ability to control exactly where on the chip the nanotubes are placed. This research proves that it is possible to integrate carbon nanotubes into existing chipmaking processes, along with the additional electronics necessary for a processor to function. The researchers have started transferring their manufacturing techniques into a silicon chip foundry via a program run by DARPA (the Defense Advanced Research Projects Agency). To know more about this research, check out the published paper.

What's new in IoT this week?
Intel's 10th gen 10nm 'Ice Lake' processor offers AI apps, new graphics and best connectivity
Hot Chips 31: IBM Power10, AMD's AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more
Alibaba's chipmaker launches open source RISC-V based 'XuanTie 910 processor' for 5G, AI, IoT and self-driving applications

Intel releases patches to add Linux Kernel support for upcoming dedicated GPU releases

Melisha Dsouza
18 Feb 2019
2 min read
Last week, Intel released a big patch series introducing the concept of memory regions to the Intel Linux graphics driver; the work is being added to the Intel "i915" Linux kernel DRM driver. Intel stated that the patches are in "preparation for upcoming devices with device local memory", without giving any specific details of these "upcoming devices". It was in December 2018 that Intel made clear it is working on everything from integrated GPUs and discrete graphics for gaming to GPUs for data centers. Fast forward to 2019, and Intel is now testing the drivers required to make them run. Phoronix was the first to speculate that this device-local memory is for Intel's discrete graphics cards with dedicated vRAM, expected to debut in 2020.

Explaining its motivation behind the new patches, Intel tweeted: https://twitter.com/IntelGraphics/status/1096537915222642689

Among other things, once implemented the patches will allow a system to:
- Have different "regions" of memory for system memory as well as for any device local memory (LMEM).
- Introduce a simple allocator and allow the existing GEM memory management code to allocate memory to different memory regions.
- Provide fake LMEM (local memory) regions to exercise the new code paths.

These patches lay the groundwork for Linux support for the upcoming dedicated GPUs. According to Phoronix's Michael Larabel, "With past generations of Intel graphics, we generally see the first Linux kernel patches roughly a year or so out from the actual hardware debut."

Twitter users have expressed enthusiasm about the announcement:
https://twitter.com/benjamimgois/status/1096544747597037571
https://twitter.com/ebound/status/1096498313392783360

You can head over to Freedesktop.org to have a look at these patches.

Researchers prove that Intel SGX and TSX can hide malware from antivirus software
Uber releases AresDB, a new GPU-powered real-time Analytics Engine
TensorFlow team releases a developer preview of TensorFlow Lite with new mobile GPU backend support

LibrePCB 0.1.0 released with major changes in library editor and file format

Amrata Joshi
03 Dec 2018
2 min read
Last week, the team at LibrePCB released LibrePCB 0.1.0, a free EDA (Electronic Design Automation) application used for developing printed circuit boards. Just three weeks ago, LibrePCB 0.1.0 RC2 was released with major changes to the library manager, control panel, library editor, schematic editor, and more. The key features of LibrePCB include cross-platform support (Unix/Linux, Mac OS X, Windows), an all-in-one workflow (project management plus library/schematic/board editors), and an intuitive, modern, and easy-to-use graphical user interface. It also features powerful library design tools and human-readable file formats.

What's new in LibrePCB 0.1.0?

Library editor
The new version saves the library URL and improves the saving of the "schematic-only" component property.

File format stability
Since this is a stable release, the file format is now stable: projects created with this version will remain loadable in future LibrePCB releases.

Users are comparing LibrePCB 0.1.0 with KiCad, a free, open source EDA application for OS X, Linux, and Windows, and asking which one is better. Many users think LibrePCB 0.1.0 comes out ahead because its part libraries are well managed, whereas KiCad lacks a coherent workflow for managing part libraries: it is difficult to keep a part's schematic symbol, footprint, and 3D model together in KiCad.

Read more about this release on the LibrePCB blog.

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly
How to secure your Raspberry Pi board [Tutorial]
Nvidia unveils a new Turing architecture: "The world's first ray tracing GPU"

Amazon FreeRTOS adds a new ‘Bluetooth low energy support’ feature

Natasha Mathur
27 Nov 2018
2 min read
The Amazon team announced a newly added Bluetooth Low Energy (BLE) support feature for Amazon FreeRTOS. Amazon FreeRTOS is an open source, free-to-download-and-use IoT operating system for microcontrollers that makes it easy to program, deploy, secure, connect, and manage small, low-powered devices. It extends the FreeRTOS kernel (a popular open source operating system for microcontrollers) with software libraries that make it easy to connect your small, low-power devices to AWS cloud services, or to more powerful devices that run AWS IoT Greengrass, software that helps extend cloud capabilities to local devices. With the help of Amazon FreeRTOS, you can collect data from these devices for IoT applications. (A minimal sketch of the underlying FreeRTOS kernel API appears at the end of this post.)

Earlier, it was only possible to connect devices to a local network using common connection options such as Wi-Fi and Ethernet. Now, with the addition of the new BLE feature, you can securely connect Amazon FreeRTOS devices that use BLE to AWS IoT via Android and iOS devices. BLE support in Amazon FreeRTOS is currently available in beta.

Amazon FreeRTOS is widely used in industrial applications, B2B solutions, and consumer products from companies such as appliance, wearable technology, and smart lighting manufacturers.

For more information, check out the official Amazon FreeRTOS update post.

FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
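For readers new to FreeRTOS itself, here is a minimal sketch of the core kernel task API that Amazon FreeRTOS builds on. It deliberately shows only plain kernel calls (xTaskCreate, vTaskDelay, vTaskStartScheduler); the new BLE and AWS connectivity libraries are not shown, since the announcement does not detail their APIs.

    #include "FreeRTOS.h"
    #include "task.h"

    /* A trivial task: pretend to sample a sensor once a second.
       (Illustrative only; the BLE / AWS IoT libraries are not shown.) */
    static void vSensorTask(void *pvParameters)
    {
        (void) pvParameters;
        for (;;)
        {
            /* Read a sensor here and hand the value to the connectivity stack. */
            vTaskDelay(pdMS_TO_TICKS(1000));   /* block for one second */
        }
    }

    int main(void)
    {
        /* Create the task: entry point, name, stack depth (in words),
           parameter, priority, and an optional handle. */
        xTaskCreate(vSensorTask, "sensor", configMINIMAL_STACK_SIZE,
                    NULL, tskIDLE_PRIORITY + 1, NULL);

        /* Hand control to the scheduler; it only returns if there is not
           enough heap to create the idle task. */
        vTaskStartScheduler();

        for (;;) { }
    }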

The Linux Foundation announces the CHIPS Alliance project for deeper open source hardware integration

Sugandha Lahoti
12 Mar 2019
2 min read
To advance open source hardware, the Linux Foundation yesterday announced a new project, the CHIPS Alliance. Backed by Esperanto, Google, SiFive, and Western Digital, the CHIPS Alliance "will foster a collaborative environment that will enable accelerated creation and deployment of more efficient and flexible chip designs for use in mobile, computing, consumer electronics, and IoT applications."

The project will help make open source CPU and system-on-a-chip (SoC) design more accessible to the market by creating an independent entity where companies and individuals can collaborate and contribute resources. It will provide the chip community with access to high-quality, enterprise-grade hardware. The project will include a Board of Directors, a Technical Steering Committee, and community contributors who will work collectively to manage it.

To initiate the process, Google will contribute a Universal Verification Methodology (UVM)-based instruction stream generator environment for RISC-V cores. The environment provides configurable, highly stressful instruction sequences that can verify architectural and micro-architectural corner cases of designs.

SiFive will improve the RocketChip SoC generator and the TileLink interconnect fabric in open source as a member of the CHIPS Alliance. It will also contribute to Chisel (a new open source hardware description language) and the FIRRTL intermediate representation specification, and will maintain Diplomacy, the SoC parameter negotiation framework.

Western Digital, another contributor, will provide its high-performance, 9-stage, dual-issue, 32-bit SweRV core, together with a test bench and a high-performance SweRV instruction set simulator. It will also contribute implementations of the OmniXtend cache coherence protocol.

Looking ahead

Dr. Yunsup Lee, co-founder and CTO of SiFive, said in a statement, "A healthy, vibrant semiconductor industry needs a significant number of design starts, and the CHIPS Alliance will fill this need." More information is available at the CHIPS Alliance org.

Mapzen, an open-source mapping platform, joins the Linux Foundation project
Uber becomes a Gold member of the Linux Foundation
Intel unveils the first 3D Logic Chip packaging technology, 'Foveros', powering its new 10nm chips, 'Sunny Cove'

Eero’s acquisition by Amazon creates a financial catastrophe for investors and employees

Savia Lobo
08 Apr 2019
4 min read
Last month, Amazon announced that it had acquired the mesh Wi-Fi router company Eero for $97 million. This deal, which sounded full of potential, struck Eero's investors and employees with a financial catastrophe. Mashable, which first reported on the details of the acquisition, found that Eero executives brought home multi-million-dollar bonuses of around $30 million and eight-figure salary increases. Others did not fare as well.

According to Mashable, "Investors took major hits, and the Amazon acquisition rendered Eero stock worthless: $0.03 per share, down from a common stock high of $3.54 in July 2017. It typically would have cost around $3 for employees to exercise their stock, meaning they would actually lose money if they tried to cash out. Former and current Eero employees who chose not to exercise those options are now empty-handed. And those who did exercise options, investing their financial faith in the company, have lost money."

Eero's devices, the first to offer mesh Wi-Fi, hit the market in 2016, but companies such as Luma and NetGear launched similar products in the following year. According to a former Eero employee, another major challenge came when Google launched its own mesh network, Google Wifi, in late 2016 for just $299, whereas Eero's was priced at $500. To stay ahead of the curve, Eero later launched a smart home security system named Hive, and Google again produced a similar product, called Nest Secure. Eero then abandoned Hive, which led to a period of confusion. "The day they killed [Hive] was the day the company changed," a former employee told Mashable. After Eero employees returned from the holidays, 20 percent of the staff was cut, followed by massive attrition; an ex-employee described it as a period of "desperate fear." Morale was so low that HR disabled group emailing and prohibited employees from sending out goodbye emails to say they were leaving, Mashable reports.

After Eero announced the acquisition last month, the specifics of the deal were not disclosed by either Eero or Amazon, which left employees guessing and angry about the deal. Per Mashable, "Employees tried to guess from news reports and social media what the deal meant for them. When the stock price leaked, some ex-employees breathed a sigh of relief that they didn't exercise their options in the first place. Others were left with worthless stock and disappointment."

All employees received a letter, dated February 15, which said they had four days to decide what to do with their Eero shares; some received the letter on or after the deadline. (Image source: Mashable)

The employees who chose to purchase or exercise their stock received a "phonebook-sized" packet of dense financial information, including the acquisition terms. Nick Weaver, Eero's co-founder, wrote in the introduction, "Unfortunately, the transaction will not result in the financial return we all hoped for." Rob Chandra, a partner at Avid Park Ventures and lecturer at UC Berkeley's Haas business school, said, "One obvious way you can judge whether it was a great exit or not is if the exit valuation is lower than the amount of capital that was invested in the startup. So it's not a great exit."

"The documents state that after transaction costs and debt, the actual price will be closer to $54.6 million. That means that Amazon is covering around $40 million of the debt that Eero owes. Ex-employees believe the debt to be from hardware manufacturing costs, since they said that Eero took on corporate financing to actually manufacture the products", Mashable reports. Jeff Scheinrock, a professor at the UCLA Anderson School of Management, said, "What this says about it was that Eero was cash strapped. A lot of this money is going to pay off debts. They were having difficulty and probably couldn't raise additional money, so they had to look for an exit."

To know more about this news in detail, head over to Mashable's complete coverage.

SUSE is now an independent company after being acquired by EQT for $2.5 billion
JFrog acquires DevOps startup 'Shippable' for an end-to-end DevOps solution
Amazon buys 'Eero' mesh router startup, adding fuel to its in-house Alexa smart home ecosystem ambitions