Tech News - IoT and Hardware

119 Articles

AI-powered Robotics: Autonomous machines in the making

Savia Lobo
16 Apr 2018
7 min read
Say "robot" to someone today and Sophia the humanoid is likely what flashes before their eyes. That is where robotics has reached, supercharged by artificial intelligence. Robotics and artificial intelligence are often confused, though there is a thin line between the two. Traditional robots are pre-programmed humanoids or machines built to do specific tasks irrespective of the environment they are placed in, so they do not show any intelligent behaviour. With a sprinkle of artificial intelligence, these robots have transformed into artificially intelligent robots, controlled by AI programs that make them capable of taking decisions when they encounter real-world situations.

How has AI helped robotics?

Artificial intelligence can loosely be divided into general and narrow AI based on the level of task specificity. General AI is the kind seen in The Terminator or The Matrix: it imparts wide-ranging, almost human-like knowledge and capabilities to machines. However, general AI is still far in the future and does not exist yet. Current robots are designed to assist humans in their day-to-day tasks in specific domains. For instance, the Roomba vacuum cleaner is largely automated and needs very little human intervention. It can make decisions when confronted with choices: if the way ahead is blocked by a couch, it might decide to turn left because it has already vacuumed the carpet to the right.

Let's look at some basic capabilities that artificial intelligence has brought to robotics, using a self-driving car as the example:

Adding the power of perception and reasoning: Novel sensors, including sonar, infrared, and Kinect sensors, give robots good perception skills, which they use to adapt to new situations. With the help of these sensors, the self-driving car takes input data from the environment (identifying roadblocks, signals, objects, people, and other cars), labels it, transforms it into knowledge, and interprets it. It then modifies its behaviour based on this perception and takes the necessary actions.

Learning process: With new experiences such as heavy traffic or a detour, the self-driving car has to perceive and reason in order to reach conclusions. The AI builds a learning process as similar experiences are repeated, storing knowledge and speeding up intelligent responses.

Making correct decisions: AI gives the driverless car the ability to prioritize actions, such as taking another route in case of an accident or detour, or braking suddenly when a pedestrian or object appears, so that its decisions are safe and effective.

Effective human interaction: This is the most prominent capability, enabled by natural language processing (NLP). The driverless car accepts and understands passenger commands through NLP-based in-car voice commands. The AI in the car understands the meaning of natural human language and readily responds to the query thrown at it; for instance, given a destination address, it will drive along the fastest route to get there. NLP also helps in understanding human emotions and sentiments.

Real-world applications of AI in robotics

Sophia the humanoid is by far the best-known real-world amalgamation of robotics and artificial intelligence. However, other real-world use cases of AI in robotics with practical applications include:

Self-supervised learning: This allows robots to create their own training examples to improve performance. For instance, if a robot has to interpret long-range, ambiguous sensor data, it uses a priori training and data captured at close range. This knowledge is later incorporated into the robots and into optical devices that can detect and reject objects (dust and snow, for example). The robot is then capable of detecting obstacles and objects in rough terrain, of 3D-scene analysis, and of modeling vehicle dynamics. One example of a self-supervised learning algorithm is a road detection algorithm in which the car's front-view monocular camera uses a road probabilistic distribution model (RPDM) and fuzzy support vector machines (FSVMs); it was designed at MIT for autonomous vehicles and other mobile on-road robots.

Medical field: In the medical sphere, a collaboration through Cal-MR, the Center for Automation and Learning for Medical Robotics, between researchers at multiple universities and a network of physicians created the Smart Tissue Autonomous Robot (STAR). Using innovations in autonomous learning and 3D sensing, STAR is able to stitch together pig intestines (used instead of human tissue) with better precision and reliability than the best human surgeons. STAR is not a replacement for surgeons, but in the future it could remain on standby to handle emergencies and assist surgeons in complex surgical procedures, offering major benefits in similarly delicate surgeries.

Assistive robots: These are robots that sense, process sensory information, and perform actions that benefit not only the general public but also people with disabilities and senior citizens. For instance, Bosch's driver assistance systems are equipped with radar sensors and video cameras, allowing them to detect road users even in complex traffic situations. Another example is the MICO robotic arm, which uses a Kinect sensor.

Challenges in adopting AI in robotics

Having an AI robot means less pre-programming, the replacement of manpower, and so on. There is always a fear that robots may outperform humans in decision making and other intellectual tasks, but one has to take risks to explore what this partnership could lead to. Casting an AI environment in robotics is clearly no cakewalk, and there are challenges that experts will face. Some of them include:

Legal aspects: Robots are, after all, machines. What if something goes wrong? Who would be liable? One way to mitigate bad outcomes is to develop extensive testing protocols for the design of AI algorithms, improved cybersecurity protections, and input validation standards. This requires AI experts who not only have a deeper understanding of the technologies, but also experts from other disciplines such as law, the social sciences, and economics.

Getting used to an automated environment: While traditional robots had to be fully pre-programmed, with AI this changes to a certain extent: experts only feed in the initial algorithms, and further changes are adopted by the robot through self-learning. AI is feared for its capacity to take over jobs and automate many processes, so broad acceptance of the new technology is required, along with a careful, managed transition for workers.

Quick learning with fewer samples: The AI systems within robots should help them learn quickly even when the supply of data is limited, unlike deep learning, which requires hordes of data to produce an output.

The AI-robotics fortune

The future of this partnership is bright, as robots become more self-dependent and may well assist humans in their decision making. For now, though, much of this still reads like fiction. At present we mostly have semi-supervised learning, which requires a human touch for the essential functioning of AI systems. Unsupervised learning, one-shot learning, and meta-learning techniques are also creeping in, promising machines that will no longer require human intervention or guidance. Robotics manufacturers such as Silicon Valley Robotics and Mayfield Robotics, together with auto manufacturers such as Toyota and BMW, are on a path to create autonomous vehicles, which shows that AI is becoming a priority investment for many.
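To make the self-supervised learning idea described above a little more concrete, here is a toy sketch of the labeling pattern: data captured at close range, where simple heuristics can be trusted, trains a classifier that is then applied to ambiguous long-range data. The feature values and the scikit-learn SVC classifier are stand-ins for illustration only; this is not the MIT RPDM/FSVM road-detection algorithm.

```python
# Toy illustration of self-supervised labeling: trust close-range data,
# use it to train a model that classifies ambiguous long-range data.
# (Made-up features; not the actual RPDM/FSVM road-detection pipeline.)
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Close-range patches: assume simple heuristics label these reliably
# (e.g. the area directly in front of the car is known to be road).
close_features = rng.normal(loc=[[0.2, 0.8]] * 50 + [[0.9, 0.1]] * 50, scale=0.05)
close_labels = np.array([1] * 50 + [0] * 50)   # 1 = road, 0 = not road

# Train on the trusted close-range labels.
clf = SVC(kernel="rbf").fit(close_features, close_labels)

# Long-range patches are too ambiguous for heuristics; classify them with
# the model learned from close range instead.
far_features = rng.normal(loc=[[0.25, 0.75], [0.85, 0.15]], scale=0.1)
print(clf.predict(far_features))   # e.g. [1 0] -> road, not road
```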

Microsoft launches Surface Go tablet at just $399

Natasha Mathur
10 Jul 2018
3 min read
Microsoft stepped up its tablet game by releasing the all-new Surface Go yesterday. The 10-inch Windows tablet looks exactly like its expensive and popular counterpart, the Surface Pro, only smaller, less powerful, and way cheaper, starting at $399. It includes features such as a 10-inch screen, a front-facing camera with facial recognition, a USB-C 3.1 port, and an integrated kickstand, among others.

Source: Microsoft Mechanics

Let's have a look at the features that make this tablet all the more alluring:

Design

The Surface Go comes with a built-in kickstand that supports unlimited positions. Its corners are slightly more rounded than the latest Surface Pro's, and it keeps the familiar magnesium design. At 1.15 lbs, it is a bit heavier than the iPad but lighter than the Surface Pro. Large bezels surround the screen, providing a place to hold the tablet and room for a wider keyboard attachment, even though they make the tablet look quite dated next to the latest iPads.

Display

The Go has a smaller 3:2 aspect ratio display (1800 x 1200 pixel resolution). The 3:2 touchscreen makes it easy to use the Go in landscape mode for more productivity. It also supports all the split-screen and multitasking modes available in Windows 10.

Processor

It comes with Intel's Pentium Gold 4415Y processor and either 4GB or 8GB of RAM, with storage of 64GB eMMC or a 128GB SSD. The processor is a dual-core seventh-generation model. According to Microsoft, it was chosen because it provides the right balance between performance, battery life, and thermal properties, allowing for a thin, fanless design.

Battery

The Surface Go has a USB-C 3.1 port alongside Microsoft's signature Surface Connect charging port. The USB-C port helps charge the tablet along with outputting video and data to external devices. Microsoft says the Surface Go comes with up to nine hours of battery life.

Additional features

Operating system: The Go runs Windows 10 with S mode enabled, so out of the box you can access only the Edge browser and apps available in the Microsoft Store.

Keyboard: The Surface Go has an optional keyboard cover, available in four colors, and can work with an optional Surface Pen. The Surface Go Type Cover provides "laptop-class typing" with a scissor-key mechanism and 1 mm of key travel. The trackpad is much larger than the one on the current Type Cover for the Surface Pro. Adding the keyboard increases the price of the Go by $99 or $129 (depending on which color you choose), while the Pen adds another $99. There's also a new $34.99 Surface Mobile Mouse, an ambidextrous, two-button Bluetooth mouse with a scroll wheel, available in silver, red, and blue to match the keyboard cover and pen.

The new Surface Go is available for pre-order starting today and will start shipping in August.

Leap Motion open sources its $100 augmented reality headset, North Star
HTC Vive Focus 2.0 update promises long battery life, among other things for the VR headset

Amazon Alexa is HIPAA-compliant: bigger leap in the health care sector

Amrata Joshi
05 Apr 2019
4 min read
Amazon has been exploring the health care sector for quite some time now. Just last year, Amazon bought the online pharmacy PillPack for $1 billion in order to sell prescription drugs. The company introduced Amazon Comprehend Medical, a machine learning tool that allows users to extract relevant clinical information from unstructured text in patient records. Amazon is even working with Accenture and Merck to develop a cloud-based platform for collaborators across the life sciences industry, with the aim of bringing innovation to drug development research.

Amazon has now taken a bigger leap by announcing that its voice assistant, Alexa, is HIPAA (Health Insurance Portability and Accountability Act) compliant, which means it can work with health care and medical software developers to invent new programs, or skills, with voice and provide better experiences to their customers. With the help of Amazon Alexa, developers will design new skills to help customers manage their healthcare needs at home simply by using voice. Patients will be able to book a medical appointment, access post-discharge instructions from the hospital, check on the status of a prescription delivery, and much more, just by speaking.

HIPAA is designed to protect patients in cases where their personal health information is shared with health care organizations such as hospitals. Compliance will allow healthcare companies to build Alexa voice tools capable of securely transmitting a patient's private information. Consumers will now be able to use new Alexa health skills to ask questions such as "Alexa, pull up my blood glucose readings" or "Alexa, find me a doctor," and receive a response from the voice assistant.

The company further announced the launch of six voice programs: Express Scripts, My Children's Enhanced Recovery After Surgery (ERAS), Cigna Health Today, Swedish Health Connect, Atrium Health, and Livongo. These new tools allow patients to use Alexa to access personalized information such as prescriptions, progress updates after surgery, and much more.

Rachel Jiang, a member of Amazon's health and wellness team who previously worked at Microsoft and Facebook, announced that Amazon has invited six healthcare partners to use its HIPAA-compliant skills kit to build voice programs, and the company expects to get more healthcare providers on board. Jiang wrote in a post, "These new skills are designed to help customers manage a variety of healthcare needs at home simply using voice – whether it's booking a medical appointment, accessing hospital post-discharge instructions, checking on the status of a prescription delivery, and more."

Boston Children's Hospital now has a new HIPAA-compliant skill dubbed "ERAS" for kids who are discharged from the hospital and for their families. With the help of the Alexa voice assistant, patients and their families or caregivers can ask the care team questions about their case, and doctors can remotely check in on the child's recovery process.

Livongo, a digital health start-up, works with employers to help them manage workers with chronic medical conditions. Livongo developed a skill for people with diabetes that uses connected glucometers and can be asked about the patient's blood sugar levels.

In a statement to CNBC, Livongo's president Jenny Schneider said there are lots of reasons she expects users to embrace voice technologies over SMS messaging or other platforms: "Some of those people might have difficulty reading, or they just have busy lives and it's just an easy option."

Express Scripts, a pharmacy benefit management organization, is working on a way for members to check the status of their home-delivery prescriptions via Alexa.

Voice technology has been booming in the health care sector, and skills like the ones mentioned above will bring health care into the home and make patients' lives easier and care more cost-effective. John Brownstein, chief innovation officer at Boston Children's Hospital, said, "We're in a renaissance of voice technology and voice assistants in health care. It's so appealing as there's very little training, it's low cost and convenient."

To know more about this news, check out Amazon's official announcement.

Amazon won't be opening its HQ2 in New York due to public protests
MariaDB announces MariaDB Enterprise Server and welcomes Amazon's Mark Porter as an advisor to the board of directors
Over 30 AI experts join shareholders in calling on Amazon to stop selling Rekognition, its facial recognition tech, for government surveillance
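To give a sense of what a custom Alexa skill looks like on the developer side, here is a minimal handler sketch using the Alexa Skills Kit SDK for Python. The intent name and spoken response are purely illustrative placeholders; a real health skill would have to be built under Amazon's HIPAA-eligible program and would fetch readings from a secure backend rather than hard-coding them.

```python
# Minimal sketch of a custom Alexa skill handler (ASK SDK for Python).
# "GetGlucoseReadingIntent" and the spoken text are hypothetical placeholders.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name


class LaunchHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = "Welcome. You can ask for your latest reading."
        return handler_input.response_builder.speak(speech).ask(speech).response


class GlucoseReadingHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("GetGlucoseReadingIntent")(handler_input)

    def handle(self, handler_input):
        # A real skill would look this up in a secure, HIPAA-eligible backend.
        speech = "Your last recorded reading was 105 milligrams per deciliter."
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(LaunchHandler())
sb.add_request_handler(GlucoseReadingHandler())
lambda_handler = sb.lambda_handler()   # entry point when hosted on AWS Lambda
```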

Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat

Melisha Dsouza
03 Oct 2018
3 min read
At the Xilinx Developer Forum in San Jose, Arm announced a collaboration with Xilinx, the market leader in FPGAs. The collaboration plans to bring the benefits of Arm Cortex-M processors to FPGAs through the Arm DesignStart program, providing scalability and a standardized processor architecture across the Xilinx portfolio. Users can expect fast, completely no-cost access to soft processor IP, while taking advantage of easy design integration with Xilinx tools and comprehensive software development solutions to accelerate success on FPGAs. These processors will enable embedded developers to design and innovate confidently, while benefiting from simplified software development and superior code density. In addition, products can easily be scaled on these processors, thanks to the support of the broad ecosystem of software, tools, and services around Arm.

Arm for FPGA comes with the following benefits:

#1 Maximum choice and flexibility

Users get easy and instant access to Cortex-M1 and Cortex-M3 soft processor IP for FPGA integration with Xilinx products, with no license fee or royalties.

#2 Reduced software costs

The program focuses on reducing software costs by maximizing reuse of software across an OEM's entire product portfolio on a standardized CPU architecture, scaling from single-board computers through to FPGAs.

#3 Ease of design

The team has ensured easy integration with Xilinx system and peripheral IP through the Vivado Design Suite, using a drag-and-drop design approach to create FPGA systems with Cortex-M processors. The extensive software ecosystem and knowledge base of others designing on Arm will ultimately reduce the time to market for these processors.

#4 Measures to combat the FOSSi (free and open source silicon) threat

Arm Cortex-M1 comes with a mandatory license agreement that contains clauses against reverse engineering. The agreement also prevents the use of these cores for comparative benchmarking. These clauses are intended to help Arm ensure its IP holds up against the latest and greatest FOSSi equivalents (like RISC-V) running on the same FPGAs. This is not the first time Arm has raised its voice against FOSSi threats; earlier this year, it launched an aggressive marketing campaign specifically targeting RISC-V.

The Arm and Xilinx collaboration will enable developers to take advantage of the benefits of heterogeneous computing on a single processor architecture. To know more about this news, head over to Arm's official blog.

Meet 'Foreshadow': The L1 Terminal Fault in Intel's chips
SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and Arm chipsets
Qualcomm announces a new chipset for standalone AR/VR headsets at Augmented World Expo

Real-time motion planning for robots made faster and more efficient with the RapidPlan processor

Melisha Dsouza
19 Dec 2018
4 min read
Yesterday, Realtime Robotics announced in a guest post in IEEE Spectrum that it has developed a new processor called RapidPlan, which tackles the bottleneck in a robot's motion planning.

Motion planning determines how to move a robot, or an autonomous vehicle, from its current position to a desired goal configuration. Although the concept sounds simple, it is far from it: not only does the robot have to reach the goal state, it also has to avoid any obstacles while doing so. According to one study, the process of collision detection (determining which edges in the roadmap, i.e., motions, cannot be used because they would result in a collision) consumed 99 percent of a motion planner's computing time.

Traditionally, motion planning has been implemented in software running on high-performance commodity hardware. The software implementation, however, introduces delays of multiple seconds, making it impossible to deploy robots in dynamic environments or environments with humans. Such robots can only be used in controlled environments with just a few degrees of freedom. The post suggests that motion planning can be sped up with more hardware resources and software optimizations, but even the vast computational resources of GPUs and the sophisticated software that maximizes their performance consume a large amount of power and cannot compute more than a few plans per second. Changes in a robot's task or scenario also often require re-tuning the software.

How does RapidPlan work?

A robot moving from one configuration to another sweeps a volume in 3D space. Collision detection determines whether that swept volume collides with any obstacle (or with the robot itself). The surfaces of the swept volume and the obstacles are represented with meshes of polygons, and collision detection comprises computations to determine whether these polygons intersect. The challenge is that this is time-consuming: each test to determine whether two polygons intersect involves cross products, dot products, division, and other computations, and there can be millions of polygon intersection tests to perform.

RapidPlan overcomes this bottleneck and achieves general-purpose, real-time motion planning, producing sub-millisecond motion plans. The processor converts the computational geometry task into a much faster lookup task. At design time, it can precompute, for a large number of motions between configurations, data that records which parts of 3D space those motions collide with. This precomputation, which is done offline by simulating the motions to determine their swept volumes, is loaded onto the processor to be accessed at runtime. At runtime, the processor receives sensory input describing which parts of the robot's environment are occupied by obstacles, and uses its precomputed data to eliminate the motions that would collide with them.

Realtime Robotics RapidPlan processor

The processor was developed as part of a research project at Duke University, where researchers found a way to speed up motion planning by three orders of magnitude using one-twentieth the power. Their processor checks for all potential collisions across the robot's entire range of motion with unprecedented efficiency. RapidPlan is retargetable, updatable on the fly, and has the capacity for tens of millions of edges.

Inheriting many of the design principles of the original Duke processor, RapidPlan has a reconfigurable and more scalable hardware design for computing whether a roadmap edge's motion collides with an obstacle. It has the capacity for extremely large roadmaps and can partition that capacity into several smaller roadmaps in order to switch between them at runtime with negligible delay. Additional roadmaps can also be transferred from off-processor memory on the fly, allowing the user, for example, to have different roadmaps corresponding to different states of the end effector or to different task types.

Robots with fast reaction times can operate safely in an environment with humans, and a robot that can plan quickly can be deployed in relatively unstructured factories and adjust to imprecise object locations and orientations. Industries like logistics, manufacturing, health care, agriculture, domestic assistance, and autonomous vehicles can benefit from this processor. You can head over to IEEE Spectrum for more insights on this news.

MIPS open sourced under 'MIPS Open Program', makes the semiconductor space and SoC, ones to watch for in 2019
Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
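The precompute-then-lookup idea described above can be illustrated with a tiny sketch: each roadmap edge carries a precomputed set of voxel IDs that its swept volume covers, and at runtime the planner simply discards edges whose sets intersect the currently occupied voxels. The edge names and voxel IDs below are made up; this only illustrates the lookup pattern, not Realtime Robotics' actual hardware design.

```python
# Toy sketch of collision checking as a lookup: each roadmap edge maps to the
# set of 3D-space voxels its motion sweeps (precomputed offline by simulation).
# At runtime, sensor data marks occupied voxels and we prune colliding edges.

precomputed_swept_voxels = {
    ("home", "pick"):  {101, 102, 203},   # hypothetical edge -> swept voxel IDs
    ("pick", "place"): {203, 204},
    ("home", "place"): {305, 306},
}

def prune_roadmap(occupied_voxels):
    """Return the roadmap edges that stay collision-free for this sensor frame."""
    return {
        edge
        for edge, swept in precomputed_swept_voxels.items()
        if swept.isdisjoint(occupied_voxels)   # no shared voxel -> no collision
    }

# Example: sensors report voxels 204 and 305 as occupied by obstacles.
print(prune_roadmap({204, 305}))   # only ("home", "pick") survives
```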

Microsoft Azure IoT Edge is open source and generally available!

Savia Lobo
29 Jun 2018
3 min read
Microsoft recently announced that Azure IoT Edge is generally available and open source. Its preview was announced at Microsoft Build 2017, where the company explained how the service would extend cloud intelligence to edge devices.

Microsoft Azure IoT Edge is a fully managed cloud service that helps enterprises generate useful insights from the data collected by Internet of Things (IoT) devices. It enables one to deploy and run artificial intelligence services, Azure services, and custom logic directly on cross-platform IoT devices, which in turn helps deliver cloud intelligence locally.

Additional features in Azure IoT Edge include:

Support for the Moby container management system: Docker is built on Moby, an open-source platform, and this support allows Microsoft Azure to extend the concepts of containerization, isolation, and management from the cloud to devices at the edge.

Azure IoT Device Provisioning Service: This service allows customers to securely provision huge numbers of devices, making edge deployments more scalable.

Tooling for VS Code: VS Code allows easy module development through coding, testing, debugging, and deploying.

Azure IoT Edge security manager: The IoT Edge security manager acts as a tough security core for protecting the IoT Edge device and all its components by abstracting the secure silicon hardware.

Automatic Device Management (ADM): The ADM service allows scaled deployment of IoT Edge modules to a fleet of devices based on device metadata. When a device with the right metadata (tags) joins the fleet, ADM brings down the right modules and puts the edge device in the correct state.

CI/CD pipeline with VSTS: This allows managing the complete lifecycle of Azure IoT Edge modules, from development and testing to staging and final deployment.

Broad language support for module SDKs: Azure IoT Edge supports more languages than other edge offerings in the market, including C#, C, Node.js, Python, and Java, allowing developers to program edge modules in the language of their choice.

There are three components required for an Azure IoT Edge deployment: the Azure IoT Edge runtime, Azure IoT Hub, and edge modules. The Azure IoT Edge runtime is free and available as open source code. Customers will require an Azure IoT Hub instance for edge device management and deployment if they are not already using one for their IoT solution.

Read the full news coverage at the Microsoft Azure IoT blog post.

Read Next
Microsoft commits $5 billion to IoT projects
Epicor partners with Microsoft Azure to adopt Cloud ERP
Introduction to IoT
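For a sense of what a custom edge module looks like in code, here is a minimal sketch of a Python module using the azure-iot-device SDK. It assumes it is running as a container managed by the IoT Edge runtime, which injects the module's credentials; the output route name "sensorOutput" and the telemetry payload are hypothetical examples.

```python
# Minimal sketch of an Azure IoT Edge module in Python (azure-iot-device SDK).
# Assumes it runs inside an IoT Edge deployment, where the runtime supplies the
# connection details via the module's environment. The route name is made up.
from azure.iot.device import IoTHubModuleClient, Message

def main():
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()
    try:
        # Send one telemetry message to a named output; a route defined in the
        # deployment manifest decides where "sensorOutput" messages go.
        telemetry = Message('{"temperature": 21.5, "humidity": 48}')
        client.send_message_to_output(telemetry, "sensorOutput")
    finally:
        client.disconnect()

if __name__ == "__main__":
    main()
```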

Arm unveils its Client CPU roadmap designed for always-on, always-connected devices

Bhagyashree R
22 Aug 2018
3 min read
Arm, the world's leading semiconductor IP company, has for the first time disclosed forward-looking compute performance data and a CPU roadmap for its Client Line of Business from now through 2020. Every year it introduces new world-class CPU designs, which have delivered double-digit gains in instructions-per-clock (IPC) performance since 2013. The aim is to enable the PC industry to overcome its reliance on Moore's law and deliver a high-performance, always-on, always-connected laptop experience.

Key highlights of this client compute CPU roadmap

Arm's client roadmap

2018: Earlier this year, the launch of the Cortex-A76 was announced. It delivers laptop-class performance while maintaining the power efficiency of a smartphone. We can expect to hear more about the first commercial devices on 7nm towards the end of the year and in the coming months.

2019: Arm will deliver the CPU codenamed 'Deimos' to its partners, a successor to the Cortex-A76. 'Deimos' is optimized for the latest 7nm nodes and is based on DynamIQ technology. DynamIQ redefines multi-core computing by combining the big and LITTLE CPUs into a single, fully integrated cluster, with many new and enhanced benefits in power and performance from mobile to infrastructure. With these improvements, it is expected to deliver a 15+ percent increase in compute performance.

2020: The CPU codenamed 'Hercules' will be available to Arm partners. Like 'Deimos', it is based on DynamIQ technology and will be optimized for both 5nm and 7nm nodes. It is expected to improve power and area efficiency by 10 percent, in addition to increasing compute performance.

What does this roadmap tell us?

Arm intends to take advantage of the disruptive innovation 5G will bring to all client devices. Innovations from its silicon and foundry partners should help Arm SoCs (systems on chip) break through the dominance of x86 and gain substantial market share in Windows laptops and Chromebooks over the next five years. The Arm Artisan Physical IP platform and Arm POP IP will help partners get every bit of performance-per-watt they can out of their SoCs on whatever process node they choose.

This latest roadmap highlights that Arm is bringing new innovations and features to the PC industry on an annual design cadence. Arm will talk more about its latest product releases and ecosystem developments at Arm TechCon, which will be held in October this year. To know more about the CPU roadmap, head over to Arm's news post.

SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets
Intel's Spectre variant 4 patch impacts CPU performance
AMD's $293 million JV with Chinese chipmaker Hygon starts production of x86 CPUs

Arduino now has a command line interface (CLI)

Prasad Ramesh
27 Aug 2018
2 min read
Listening to the Arduino developer community, the Arduino team has released a command line interface (CLI) for Arduino. The CLI is a single binary file that performs most of the features present in the IDE; until now there was a wide gap between using the IDE and being able to do everything for Arduino from the command line.

The CLI will allow you to install new libraries, create new projects, and compile projects directly from the command line, giving developers a quick way to test their projects. You can also create and compile your own libraries, and use them alongside third-party code. Installing project dependencies is as easy as typing the following command:

arduino-cli lib install "WiFi101" "WiFi101OTA"

In addition, the CLI has a JSON interface for easy parsing by other programs. There were many requests for Makefile integration, and support for it has been added. The Arduino CLI can run on both ARM and Intel (x86, x86_64) architectures, which means it can be installed on a Raspberry Pi or on any server.

Massimo Banzi, Arduino founder, stated: "I think it is very exciting for Arduino, one single binary that does all the complicated things in the Arduino IDE."

The Arduino team looks forward to seeing people integrate this tool into various IDEs. In their blog post, the team writes: "Imagine having the Arduino IDE or Arduino Create Editor speaking directly to Arduino CLI – and you having full control of it. You will be able to compile on your machine or on our online servers, detect your board or create your own IDE on top of it!"

The CLI is pitched as a better alternative to PlatformIO and works on all three major operating systems: Linux, Windows, and macOS. The code is open source, but you will need a license for commercial use. Visit the GitHub repository to get started with Arduino CLI.

How to assemble a DIY selfie drone with Arduino and ESP8266
How to build an Arduino based 'follow me' drone
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
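The JSON interface mentioned above is what makes the CLI easy to drive from other programs. As a small sketch (assuming arduino-cli is installed and on your PATH; the exact JSON schema varies between CLI versions, so the output is just pretty-printed here), a script can list detected boards and parse the result:

```python
# Sketch: call arduino-cli from Python and parse its JSON output.
# Assumes arduino-cli is on PATH; the JSON schema depends on the CLI version,
# so we only pretty-print whatever comes back.
import json
import subprocess

result = subprocess.run(
    ["arduino-cli", "board", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
)

boards = json.loads(result.stdout)
print(json.dumps(boards, indent=2))   # detected ports and identified boards
```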

The new Bolt robot from Sphero wants to teach kids programming

Prasad Ramesh
12 Sep 2018
2 min read
Sphero, a robotic toy company, has announced its latest Bolt robotic ball, aimed at teaching kids basic programming. It has advanced sensors, an LED matrix, and infrared sensors to communicate with other Bolt robots.

The robot itself is 73mm in diameter. There's an 8x8 LED matrix inside a transparent shell. This matrix displays helpful prompts, such as a lightning bolt when Bolt is charging. Users can fully program the LED matrix to display a wide variety of icons tied to certain actions: a smiley face when a program completes, a sad face on failure, or arrows for direction changes.

The new Bolt has a longer battery life of around two hours and charges back up in six hours. It connects to the Sphero Edu app, where you can use community-created activities, build your own, analyze sensor data, and more. The casing is now transparent instead of the opaque colored shells of previous Sphero balls. The sphere weighs 200g in all and houses infrared sensors that allow the Bolt to detect other nearby Bolts to interact with; users can program specific interactions between multiple Bolts.

The Edu app supports coding by drawing on the screen or via Scratch blocks. You can also use JavaScript to program the robot and create custom games and drawings. There are sensors to track speed, acceleration, and direction, or to drive BOLT, which can be done without having to aim since the Bolt has a compass. There is also an ambient light sensor that allows programming the Bolt based on the room's brightness. Beyond education, you can simply drive BOLT and play games with the Sphero Play app.

Source: Sphero website

It sounds like a useful little robot and is available now to consumers for $149.99. Educators can also buy BOLT in 15-packs for classroom learning. For more details, visit the Sphero website.

Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.
How to assemble a DIY selfie drone with Arduino and ESP8266
ROS Melodic Morenia released

Shadow Robot Company, SynTouch, HaptX, and ANA Holdings collaborate on 'haptic robot hand' that can successfully transmit touch across the globe

Bhagyashree R
04 Mar 2019
3 min read
A new advancement in haptic robots arrived when four organizations, Shadow Robot Company, SynTouch, HaptX, and ANA Holdings, came together. The companies have built the "world's first haptic robot hand," which transmits touch to the operator; they shared the details on Friday.

Credit: Shadow Robot Company

Haptics is one of the growing technologies in the field of human-computer interaction that deals with sensory interaction with computers. It is essentially the science of applying touch sensation and control to interaction with virtual or physical applications.

How the haptic robot hand works

First, the HaptX Gloves capture motion data to control the movement of the anthropomorphic dexterous hand built by Shadow Robot Company. BioTac sensors built by SynTouch are embedded in each fingertip of the robotic hand to collect tactile data. This data is used to recreate haptic feedback in the HaptX Gloves, which transmit it to the user's hand.

The system was first demonstrated in front of all the collaborating companies. In the demo, an operator in California used a haptic glove to control a dexterous robotic hand in London, under the guidance of a team from ANA Holdings in Tokyo. When the robot started typing on the computer keyboard, the embedded tactile sensors on the robot's fingertips recorded the press of each key, and the haptic data was shared with the human operator in California over the network in real time. The words typed by the robot were "Hello, World!". In the demo, the telerobot was also shown doing a number of other things, such as playing Jenga, building a pyramid of plastic cups, and moving chess pieces on a chess board.

Credit: Shadow Robot Company

In an email to us explaining the applications and importance of this advancement, Kevin Kajitani, Co-Director of ANA AVATAR within ANA Holdings, said, "This achievement by Shadow Robot, SynTouch, and HaptX marks a significant milestone towards achieving the mission of Avatar X. This prototype paves the way for industry use, including medicine, construction, travel, and space exploration."

Rich Walker, Managing Director of Shadow Robot Company, said, "This teleoperation system lets humans and robots share their sense of touch across the globe - it's a step ahead in what can be felt and done remotely. We can now deliver remote touch and dexterity for people to build on for applications like safeguarding people from hazardous tasks, or just doing a job without having to fly there! It's not touch-typing yet, but we can feel what we touch when we're typing!"

Dr. Jeremy Fishel, Co-Founder of SynTouch, said, "We know from psychophysical studies that the sense of touch is essential when it comes to dexterity and manipulation. This is the first time anyone has ever demonstrated a telerobot with such high-fidelity haptics and control, which is very promising and would not have been possible without the great engineers and technologies from this collaboration."

Jake Rubin, Founder and CEO of HaptX, said, "Touch is a cornerstone of the next generation of human-machine interface technologies. We're honored to be part of a joint engineering effort that is literally extending the reach of humankind."

The new Bolt robot from Sphero wants to teach kids programming
Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.
Shadow Robot joins Avatar X program to bring real-world avatars into space
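As a rough illustration of the pipeline described above (glove pose streamed out to the remote hand, fingertip tactile data streamed back and rendered as touch), here is the general shape of such a control loop. Every class and method name here is hypothetical; the vendors' real SDKs and network transport are not described in this article.

```python
# Hypothetical sketch of a bidirectional teleoperation loop: stream glove joint
# angles to a remote robot hand, receive fingertip pressure back, and render it
# as haptic feedback. None of these classes correspond to real vendor SDKs.
import time

class GloveInterface:            # stand-in for the operator-side glove SDK
    def read_joint_angles(self): ...
    def render_feedback(self, pressures): ...

class RemoteHandLink:            # stand-in for the network link to the robot hand
    def send_pose(self, joint_angles): ...
    def receive_tactile(self): ...

def teleoperate(glove: GloveInterface, hand: RemoteHandLink, hz: float = 100.0):
    period = 1.0 / hz
    while True:
        pose = glove.read_joint_angles()     # capture operator motion
        hand.send_pose(pose)                 # drive the remote dexterous hand
        pressures = hand.receive_tactile()   # fingertip sensor data comes back
        glove.render_feedback(pressures)     # recreate touch on the glove
        time.sleep(period)                   # fixed-rate control loop
```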

Linux forms Urban Computing Foundation: Set of open source tools to build autonomous vehicles and smart infrastructure

Fatema Patrawala
09 May 2019
3 min read
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, on Tuesday announced the formation of the Urban Computing Foundation (UCF). UCF will accelerate open source software that improves mobility, safety, road infrastructure, traffic congestion, and energy consumption in connected cities. UCF's mission is to enable developers, data scientists, visualization specialists, and engineers to improve urban environments, human quality of life, and city operation systems, and to build connected urban infrastructure. The founding members of UCF include Facebook, Google, IBM, UC San Diego, Interline Technologies, and Uber, among others.

Jim Zemlin, executive director of the Linux Foundation, told VentureBeat that the foundation will adopt an open governance model developed by the Technical Advisory Council (TAC), which includes technical and IP stakeholders in urban computing who will guide its work by reviewing and curating projects. The intent, added Zemlin, is to provide platforms to developers who seek to address traffic congestion, pollution, and other problems plaguing modern metros.

Here's the list of TAC members:

Drew Dara-Abrams, principal, Interline Technologies
Oliver Fink, director Here XYZ, Here Technologies
Travis Gorkin, engineering manager of data visualization, Uber
Shan He, project leader of Kepler.gl, Uber
Randy Meech, CEO, StreetCred Labs
Michal Migurski, engineering manager of spatial computing, Facebook
Drishtie Patel, product manager of maps, Facebook
Paolo Santi, senior researcher, MIT
Max Sills, attorney, Google

On Tuesday, Facebook announced its participation as a founding member of the Urban Computing Foundation.

https://twitter.com/fb_engineering/status/1125783991452180481

Facebook mentions in its post, "We are using our expertise — including a predictive model for mapping electrical grids, disaster maps, and more accurate population density maps — to improve access to this type of technology." Facebook adds that UCF will establish a neutral space for this critical work, which will include adapting geospatial and temporal machine learning techniques for urban environments and developing simulation methodologies for modeling and predicting citywide phenomena.

Uber also announced its participation and contributed Kepler.gl as the initiative's first official project. Kepler.gl is Uber's open source, no-code geospatial analysis tool for large-scale data sets. Released in 2018, it is currently used by Airbnb, Atkins Global, Cityswifter, Lime, Mapbox, Sidewalk Labs, and UBILabs, among others, to generate visualizations of location data.

While all of this sets a path towards smarter cities, it also raises alarms about yet another way of violating privacy and mishandling user data, given tech's track record. Recently, Amnesty International in Canada said that the Google Sidewalk Labs project in Toronto normalizes mass surveillance and is a direct threat to human rights. Questions are being raised about tech companies forming a foundation to address traffic congestion while not addressing privacy violations or online extremism.

https://twitter.com/shannoncoulter/status/1126199285530238976

The Linux Foundation announces the CHIPS Alliance project for deeper open source hardware integration
Mapzen, an open-source mapping platform, joins the Linux Foundation project
Uber becomes a Gold member of the Linux Foundation
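To show the kind of tool being contributed, here is a small sketch of loading point data into Kepler.gl's Jupyter widget using the keplergl Python package. The column names and sample coordinates are arbitrary placeholders; Kepler.gl tries to auto-detect latitude and longitude fields, and the exact widget behaviour depends on the package version.

```python
# Sketch: visualize a small point data set with Kepler.gl's Jupyter widget.
# Assumes the keplergl and pandas packages; data values are made-up examples.
import pandas as pd
from keplergl import KeplerGl

trips = pd.DataFrame({
    "latitude":  [37.7749, 37.7790, 37.7680],
    "longitude": [-122.4194, -122.4170, -122.4290],
    "pickups":   [12, 7, 23],
})

city_map = KeplerGl(height=500)                 # interactive map widget
city_map.add_data(data=trips, name="pickups")
city_map.save_to_html(file_name="pickups_map.html")   # shareable standalone map
```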

USB 4 will integrate Thunderbolt 3 to increase the speed to 40Gbps

Amrata Joshi
05 Mar 2019
2 min read
Just a week after revealing the details of USB 3.2, the USB Implementers Forum (USB-IF) announced USB4, the next version of the ubiquitous connector, yesterday at an event in Taipei. According to the USB-IF, USB4 doubles the transfer speed from 20 Gbps to 40 Gbps.

The USB-IF uses Thunderbolt 3 as the foundation for USB4. Intel provides manufacturers with Thunderbolt 3 along with open licensing, and USB4 will integrate this technology and effectively become the "new" Thunderbolt 3. USB4 will be ready for powerful PCIe and DisplayPort devices: an external graphics card enclosure, two 4K monitors, and other Thunderbolt 3 accessories can be connected to a PC using a single cable. It will also be compatible with USB 2.0 and 3.2.

USB4 will come with support for charging at up to 100W of power, transfer speeds of 40 Gbps, and video bandwidth for two 4K displays or one 5K display. It is likely to become widely available and cheaper over time.

In a statement to Techspot, Brad Saunders, USB Promoter Group Chairman, said, "The primary goal of USB is to deliver the best user experience combining data, display and power delivery over a user-friendly and robust cable and connector solution. The USB4 solution specifically tailors bus operation to further enhance this experience by optimizing the blend of data and display over a single connection and enabling the further doubling of performance."

The USB-IF plans to produce a list of features for USB4 that will help standardize capabilities such as display out and audio out, though the exact features are yet to be determined.

Some users are not very confident about USB4. One user commented on Hacker News, "Maybe it will charge the device. Maybe it won't. Maybe it'll do USB hosting, maybe it won't." Others think the group's major focus is on manufacturers, with user experience coming second. Another comment reads, "USB-IF is for manufacturers, most of whom want to do whatever the cheapest quickest thing is. The user experience absolutely comes second to manufacturing cost and marking convenience."

To know more about this, check out the post by Engadget.

USB-IF launches 'Type-C Authentication Program' for better security
Apple USB Restricted Mode: Here's Everything You Need to Know
Working on Jetson TX1 Development Board [Tutorial]
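To put the headline numbers in perspective, a quick back-of-the-envelope calculation shows the theoretical best-case transfer times at the old and new maximum link rates. These are idealized figures only; real-world throughput will be lower once protocol overhead and storage bottlenecks are accounted for.

```python
# Back-of-the-envelope: theoretical minimum time to move a file at a given
# link rate, ignoring protocol overhead and storage bottlenecks.
def seconds_to_transfer(size_gb, link_gbps):
    return size_gb * 8 / link_gbps   # gigabytes -> gigabits, then / (gigabits/s)

for link_gbps in (20, 40):           # USB 3.2 Gen 2x2 vs. USB4 maximums
    t = seconds_to_transfer(100, link_gbps)
    print(f"100 GB at {link_gbps} Gbps: about {t:.0f} seconds")
# -> about 40 seconds at 20 Gbps, about 20 seconds at 40 Gbps
```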

Hybrid nanomembranes make conformal wearable sensors possible, demo South Korean researchers with imperceptible loudspeakers and mics

Natasha Mathur
20 Sep 2018
4 min read
A team of researchers from the Ulsan National Institute of Science and Technology (UNIST) in South Korea has developed an ultrathin, transparent wearable device that can turn your skin into a loudspeaker. The device was created to help people with hearing and speech impairments, but it has potential applications in other domains such as wearable IoT sensors and healthcare devices.

Skin-attachable NM loudspeaker

The new device is made of conductive hybrid nanomembranes (NMs) of nanoscale thickness, comprising an orthogonal silver nanowire array embedded in a polymer matrix. This substantially enhances the electrical as well as mechanical properties of ultrathin polymer NMs, and thanks to the orthogonal array structure there is no loss of optical transparency. "Here, we introduce ultrathin, conductive, and transparent hybrid NMs that can be applied to the fabrication of skin-attachable NM loudspeakers and microphones, which would be unobtrusive in appearance because of their excellent transparency and conformal contact capability," the research paper states.

Hybrid NMs significantly enhance the electrical and mechanical properties of ultrathin polymer NMs, which can then be intimately attached to human skin. The nanomembrane can then be used as a loudspeaker that can be attached to almost anything to produce sound. The researchers also introduced a similar device that acts as a microphone, which can be connected to smartphones and computers to unlock voice-activated security systems.

Skin-attachable and transparent NM loudspeaker

The researchers fabricated a skin-attachable loudspeaker using hybrid NMs. This speaker emits thermoacoustic sound through temperature-induced oscillation of the surrounding air. The temperature oscillation is caused by Joule heating of the orthogonal AgNW array when an AC voltage is applied. The sound emitted from the NM loudspeaker is then analyzed with an acoustic measurement system. "We used a commercial microphone to collect and record the sound produced by the loudspeaker. To characterize the sound generation of the loudspeaker, we confirmed that the sound pressure level (SPL) of the output sound increases linearly as the distance between the microphone and the loudspeaker decreases," the paper reads.

Wearable and transparent NM microphone

The researchers also designed a wearable and transparent microphone using hybrid NMs combined with micropatterned PDMS (the NM microphone). This microphone can detect sound and recognize a human voice. These wearable microphones are sensors attached to a speaker's neck to sense the vibration of the vocal folds.

Skin-attachable NM microphone

The skin-attachable NM microphone comprises a hybrid NM mounted on a micro-pyramid-patterned polydimethylsiloxane (PDMS) film. This sandwich-like structure precisely detects the sound and vibration of the vocal cords through the generation of a triboelectric voltage, which results from the coupling of contact electrification and electrostatic induction. The sensor works by converting the frictional force generated by the oscillation of the transparent conductive nanofiber into electric energy.

The sensitivity of the NM microphone in response to sound emissions was evaluated by fabricating two device structures: a freestanding hybrid NM integrated with a holey PDMS film (the NM microphone), and one fully adhered to a planar PDMS film without a hole. "As a proof-of-concept demonstration, our NM microphone was applied to a personal voice security system requiring voice-based identification applications. The NM microphone was able to accurately recognize a user's voice and authorize access to the system by the registrant only," the paper reads.

For more details, check out the official research paper.

Now Deep reinforcement learning can optimize SQL Join Queries, says UC Berkeley researchers
MIT's Transparency by Design Network: A high-performance model that uses visual reasoning for machine interpretability
Swarm AI that enables swarms of radiologists, outperforms specialists or AI alone in predicting Pneumonia

Helium proves to be less than an ‘ideal gas’ for iPhones and Apple watches

Prasad Ramesh
31 Oct 2018
3 min read
"Hey, turn off the helium, it's bad for my iPhone" is not something you hear every day. In an unusual event at a facility this month, an MRI machine affected iPhones and Apple Watches: many iPhone users in the building started to experience issues, and their devices stopped working. Originally an EMP burst was suspected of shutting down the devices, but it was noted that only iPhone 6 and above and Apple Watch Series 0 and above were affected; the only iPhone 5 in the building and the Android phones remained functional. Luckily, none of the patients reported any issues.

The cause turned out to be the new MEMS oscillators used in the affected, newer devices. These tiny components are used to measure time and work properly only under certain conditions, such as a vacuum or a specific gas surrounding the part. Helium, being a sneaky one-atom gas, can get through the tiniest of crevices.

An MRI machine was being installed, and in the process the coolant, helium, leaked: approximately 120 liters over the span of 5 hours. Helium expands hundreds of times when it turns from liquid to gas, and with a boiling point of around −268 °C, it did so at room temperature. With 120 liters of liquid, a large part of a building could be flooded with the gas.

Apple does mention this in its official iPhone help guide: "Exposing iPhone to environments having high concentrations of industrial chemicals, including near evaporating liquified gasses such as helium, may damage or impair iPhone functionality."

So what if your device is affected? Apple also mentions: "If your device has been affected and shows signs of not powering on, the device can typically be recovered. Leave the unit unconnected from a charging cable and let it air out for approximately one week. The helium must fully dissipate from the device, and the device battery should fully discharge in the process. After a week, plug your device directly into a power adapter and let it charge for up to one hour. Then the device can be turned on again."

The original poster on Reddit, harritaco, even performed an experiment and posted it on YouTube. Although not much happens in the 8-minute video, he says that after repeating the experiment for 12 minutes the phone turned off. For more details and discussion, visit the Reddit thread.

A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Apple launches iPad Pro, updates MacBook Air and Mac mini
Facebook and NYU are working together to make MRI scans 10x faster
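For a rough sense of scale, here is a quick estimate of how much gas that leak represents. The ~750x liquid-to-gas expansion ratio used below is a commonly cited approximation for helium at room conditions and is my assumption, not a figure from the original report.

```python
# Rough order-of-magnitude estimate of the leak, assuming an approximate
# 750x liquid-to-gas expansion ratio for helium at room temperature.
liquid_litres = 120
expansion_ratio = 750                  # approximate; "hundreds of times"
gas_litres = liquid_litres * expansion_ratio
print(f"~{gas_litres:,} litres of helium gas")   # ~90,000 L, enough to fill rooms
```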

The iRobot Roomba i7+ is a cleaning robot that maps and remembers your house and also empties its trash automatically

Prasad Ramesh
07 Sep 2018
2 min read
iRobot, the intelligent robot maker, revealed its latest robot vacuum, the Roomba i7+, yesterday. It is a successor to the Roomba 980, which was launched in 2015. The i7+ has two key changes: it stores a map of your house, and it empties the trash itself.

https://www.youtube.com/watch?v=HPgxcETuqzI

Weighing about 7.4 lbs, the Roomba i7+ is designed to be easier to manage than the previous models. The new charging base houses a larger trash bin for automatic emptying: the stationary base automatically sucks the debris out of the Roomba into a bag, and it can hold the dirt of 30 cleanings. That means you'll have to empty the bigger trash bag only about once a month, depending on your cleaning needs. The i7+ cleans with two rubber brushes, one to loosen up the dirt and another to lift and collect it, and the large bag in the base traps dust so that it can't escape.

It runs on iAdapt® 3.0 Navigation with vSLAM® technology, both of which are patented. These allow the robot to map its surroundings and clean sections of your home systematically, creating visual landmarks to keep track of areas it has cleaned and areas still to be cleaned.

Source: iRobot

The i7+, like the older models, connects to the iRobot Home app and can sync with virtual assistants like Alexa to schedule cleanings. Like the previous 900 series, the i7+ maps your house, the difference being that the newer model stores the map for automatic navigation later. You can use the app to differentiate and name different rooms and control the cleaning frequency, and with an assistant you can use voice commands to clean specific rooms.

The i7+ will be available in stores from October. The price tag of $949 may not appeal to everyone, but if you want your house cleaned automatically, this is something to consider. There is also a lower-priced model, the i7, with a price tag of $699. This version does not have the self-emptying base or the mapping features, but it can be controlled over Wi-Fi or with an assistant. You can pre-order the latest Roomba i7+ from the iRobot website.

Home Assistant: an open source Python home automation hub to rule all things smart
How Rolls Royce is applying AI and robotics for smart engine maintenance
6 powerful microbots developed by researchers around the world