
Tech News - IoT and Hardware

119 Articles
AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases

Prasad Ramesh
27 Nov 2018
4 min read
At the AWS re:Invent 2018 event yesterday, Amazon announced a variety of IoT-related AWS releases.

Three new AWS IoT Service Delivery Designations at AWS re:Invent 2018

The AWS Service Delivery Program helps customers find and select top APN Partners with a track record of delivering specific AWS services. APN Partners undergo a technical validation of their service delivery expertise in order to earn an AWS Service Delivery designation. Three new AWS IoT Service Delivery Designations have now been added: AWS IoT Core, AWS IoT Greengrass, and AWS IoT Analytics.

AWS IoT Things Graph

AWS IoT Things Graph gives developers an easy way to connect different devices and web services in order to build IoT applications. Devices and web services are represented as reusable components called models. These models hide low-level details and expose the states, actions, and events of the underlying devices and services as APIs. A drag-and-drop interface is available to connect the models visually and define interactions between them, making it possible to build multi-step automation applications. Once built, an application can be deployed to your AWS IoT Greengrass-enabled device with a few clicks. Areas where it can be used include home automation, industrial automation, and energy management.

AWS IoT Greengrass has extended functionality

AWS IoT Greengrass brings capabilities like local compute, messaging, data caching, sync, and ML inference to edge devices. New features that extend the capabilities of AWS IoT Greengrass can now be used: connectors to third-party applications and AWS services; hardware root of trust private key storage; and isolation and permission configurations that increase the AWS IoT Greengrass Core configuration options. The connectors let you easily build complex workflows on AWS IoT Greengrass even if you have no understanding of device protocols, managing credentials, or interacting with external APIs. Connections can be made without writing code.

Security is strengthened by hardware root of trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware-root-of-trust-level security to the existing AWS IoT Greengrass security features, which include X.509 certificates. This enables mutual TLS authentication and encryption of data, whether in transit or at rest. The hardware secure element can also be used to protect secrets deployed to an AWS IoT Greengrass device. New configuration options allow deploying AWS IoT Greengrass to another container environment and directly accessing low-power devices such as Bluetooth Low Energy (BLE) devices.

AWS IoT SiteWise, available in limited preview

AWS IoT SiteWise is a new service that simplifies collecting and organizing data from industrial equipment at scale. With this service, you can easily monitor equipment across industrial facilities to identify waste, production inefficiencies, and product defects. With IoT SiteWise, industrial data is stored securely in the cloud, where it is available and searchable. IoT SiteWise integrates with industrial equipment via a gateway, which securely connects to on-premises data servers to collect data and send it to the AWS Cloud. AWS IoT SiteWise can be used in areas such as manufacturing, food and beverage, energy, and utilities.

AWS IoT Events, available in preview

AWS IoT Events is a new IoT service that makes it easy to catch and respond to events from IoT sensors and applications. The service recognizes events across multiple sensors in order to identify operational issues, such as equipment slowdowns, and triggers alerts to notify support teams. It offers a managed complex event detection service on the AWS cloud, making it simple to detect events across thousands of IoT sensors measuring values like temperature and humidity.
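As an illustration of the kind of rule such an event detection service manages, here is a toy, stateless threshold detector in plain Python. The sensor names and limits are invented for illustration; the actual AWS IoT Events service expresses this logic as detector models rather than code like this.

```python
# Hypothetical per-metric alert thresholds (not from any AWS documentation).
THRESHOLDS = {"temperature": 80.0, "humidity": 90.0}

def detect_events(readings):
    """Scan (sensor_id, metric, value) readings and flag threshold breaches."""
    alerts = []
    for sensor_id, metric, value in readings:
        limit = THRESHOLDS.get(metric)
        if limit is not None and value > limit:
            alerts.append((sensor_id, metric, value))
    return alerts

readings = [
    ("pump-1", "temperature", 75.2),  # within limits
    ("pump-2", "temperature", 84.1),  # breach
    ("line-3", "humidity", 95.0),     # breach
]
print(detect_events(readings))
# → [('pump-2', 'temperature', 84.1), ('line-3', 'humidity', 95.0)]
```

A real deployment would evaluate rules statefully per device and fan alerts out to notification services, but the core idea is the same: simple conditions evaluated across a fleet of sensor inputs.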
System-wide event detection and response with appropriate actions is easy and cost-effective with AWS IoT Events. Potential areas of use include manufacturing, oil and gas, and commercial and consumer products.

Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!

Melisha Dsouza
25 Feb 2019
4 min read
The ongoing Mobile World Congress 2019 in Barcelona has an interesting line-up of announcements, keynote speakers, summits, seminars and more. It is the largest mobile event in the world, bringing together the latest innovations and leading-edge technology from more than two thousand leading companies. The theme of this year’s conference is ‘Intelligent Connectivity’: the combination of flexible, high-speed 5G networks, the Internet of Things (IoT), artificial intelligence (AI) and big data. Microsoft unveiled a host of new products along the same theme on the first day of the conference. Let’s have a look at some of them.

#1 Microsoft HoloLens 2 AR announced!

Microsoft unveiled the HoloLens 2 AR device at the Mobile World Congress (MWC). This $3,500 AR device is aimed at businesses, not the average person, yet. It is designed primarily for situations where field workers need to work hands-free, such as manufacturing workers, industrial designers and those in the military. The device is a clear upgrade from Microsoft’s very first HoloLens, which recognized basic tap and click gestures. The new headset recognizes 21 points of articulation per hand, allowing for improved, more realistic hand motions. The device is less bulky, and its eye tracking can measure eye movement and use it to interact with virtual objects. It is built to be a cloud- and edge-connected device, and its field of view more than doubles the area covered by HoloLens 1. Microsoft said it plans to announce a follow-up to HoloLens 2 in the next year or two; according to the company, that device will be even more comfortable and easier to use, and will do more than the HoloLens 2. HoloLens 2 is available for preorder and will ship later this year. The device has already found itself in the midst of a controversy after the US Army invested $480 million in more than 100,000 headsets. The contract has stirred dissent amongst Microsoft workers.

#2 Azure-powered Kinect camera for enterprise

The Azure-powered Kinect camera is an “intelligent edge device that doesn’t just see and hear but understands the people, the environment, the objects, and their actions,” according to Azure VP Julia White. This AI-powered smart enterprise camera leverages Microsoft’s 3D imaging technology and could serve as a companion hardware piece for HoloLens in the enterprise. The system has a 1-megapixel depth camera, a 12-megapixel camera and a seven-microphone array on board to help it work with “a range of compute types, and leverage Microsoft’s Azure solutions to collect that data.” The system, priced at $399, is available for pre-order.

#3 Azure Spatial Anchors

Azure Spatial Anchors launched as part of the Azure mixed reality services, which will help developers and businesses build cross-platform, contextual and enterprise-grade mixed reality applications. According to the Azure blog, these mixed reality apps can map, designate and recall precise points of interest that are accessible across HoloLens, iOS, and Android devices. Developers can integrate their solutions with IoT services and artificial intelligence, and protect their sensitive data using security from Azure. Users can easily infuse artificial intelligence (AI) and integrate IoT services to visualize data from IoT sensors as holograms. Spatial Anchors will allow users to map their space and connect points of interest “to create wayfinding experiences, and place shareable, location-based holograms without any need for environmental setup or QR codes”. Users will also be able to manage identity, storage, security, and analytics with pre-built cloud integrations to accelerate their mixed reality projects.

#4 Unreal Engine 4 support for Microsoft HoloLens 2

During the Mobile World Congress (MWC), Epic Games founder and CEO Tim Sweeney announced that support for Microsoft HoloLens 2 will be coming to Unreal Engine 4 in May 2019. Unreal Engine will fully support HoloLens 2 with streaming and native platform integration. Sweeney said that “AR is the platform of the future for work and entertainment, and Epic will continue to champion all efforts to advance open platforms for the hardware and software that will power our daily lives.” Unreal Engine 4 support for Microsoft HoloLens 2 will allow for “photorealistic” 3D in AR apps. Head over to Microsoft’s official blog for in-depth coverage of all the products announced.

Apple convincingly lobbied against ‘right to repair’ bill in California citing consumer safety concern

Amrata Joshi
03 May 2019
3 min read
Apple is known for designing its products in such a way that no one except Apple experts can easily repair them when issues arise. It now seems the company is trying hard to kill the ‘Right to Repair’ bill in California, which might work against Apple. The ‘Right to Repair’ bill, versions of which have been introduced in 18 states, is currently under discussion in California. Under this bill, consumers would get the right to fix or mod their devices without voiding their warranty. The company has managed to lobby California lawmakers and pushed consideration of the bill to 2020. https://twitter.com/kaykayclapp/status/1123339532068253696 According to a recent report by Motherboard, an Apple representative and a lobbyist have been privately meeting with legislators in California to encourage them to drop the bill. The company is doing so by stoking fears of battery explosions among consumers who attempt to repair their iPhones: the Apple representative argued that consumers might hurt themselves if they accidentally puncture the flammable lithium-ion batteries in their phones. In a statement to The Verge, California Assemblymember Susan Talamantes Eggman, who first introduced the bill in March 2018 and again in March 2019, said, “While this was not an easy decision, it became clear that the bill would not have the support it needed today, and manufacturers had sown enough doubt with vague and unbacked claims of privacy and security concerns.” Apple’s iPhone sales slowed last quarter, so the company anticipates that consumers may buy new handsets instead of getting old ones repaired. But the claim that batteries might get punctured will bother many, and there will surely be plenty of speculation around it. Kyle Wiens, iFixit co-founder, laughs at the idea of an iPhone battery being punctured during a repair; he admits it is possible, but says it rarely happens.

Wiens says, “Millions of people have done iPhone repairs using iFixit guides, and people overwhelmingly repair these phones successfully. The only people I’ve seen hurt themselves with an iPhone are those with a cracked screen, cutting their finger.” He further added, “Whether it uses gasoline or a lithium-ion battery, most every car has a flammable liquid inside. You can also get badly hurt if you’re changing a tire and your car rolls off the jack.” That said, a recent example from WSJ tech reviewer David Pierce shows that such failures do happen. https://twitter.com/pierce/status/1113242195497091072 With so much talk around repairing and replacing, it’s difficult to predict whether the ‘Right to Repair’ bill will come into force for iPhones anytime soon. Only in 2020 will we get a clearer picture of the bill, and learn whether consumer safety is really at stake or whether this is about the company’s benefit.

Apple gets into chip development and self-driving autonomous tech business

Amrata Joshi
28 Jun 2019
3 min read
Apple recently hired Mike Filippo, lead CPU architect and one of the top chip engineers at ARM Holdings, a semiconductor and software design company. According to Filippo’s updated LinkedIn profile, he joined Apple in May as an architect and is working out of the Austin, Texas area. He worked at ARM for ten years as the lead engineer designing the chips used in most smartphones and tablets. Previously, he had also worked as a key designer at chipmakers Advanced Micro Devices and Intel Corp. In a statement to Bloomberg, an ARM spokesman said, “Mike was a long-time valuable member of the ARM community.” He added, “We appreciate all of his efforts and wish him well in his next endeavor.” Apple’s A-series chips used in its mobile devices are based on ARM technology, while for almost two decades Mac computers have had Intel processors. Filippo’s experience at these companies could therefore prove to be a major plus for Apple. Apple has planned to use its own chips in Mac computers in 2020, replacing processors from Intel Corp with ARM-architecture-based processors. Apple also plans to expand its in-house chip-making work to new device categories, like a headset that meshes augmented and virtual reality, Bloomberg reports.

Apple acquires Drive.ai, an autonomous driving startup

Apart from the chip-making business, there are reports of Apple racing into the league of self-driving autonomous technology. The company had previously introduced its own self-driving vehicle project, called Titan, which is still a work in progress. On Wednesday, Axios reported that Apple has acquired Drive.ai, an autonomous driving startup valued at $200 million. Drive.ai was on the verge of shutting down and was laying off all its staff. The news indicates that Apple is interested in testing the waters of self-driving autonomous technology, and the move might help speed up the Titan project.

Drive.ai had been in search of a buyer since February this year and had communicated with many potential acquirers before striking the deal with Apple. Apple also purchased Drive.ai’s autonomous cars and other assets. The amount Apple paid for Drive.ai has not yet been disclosed, but as per a recent report, Apple was expected to pay less than the $77 million invested by venture capitalists. The company has also hired engineers and managers from Waymo and Tesla, and has recruited around five software engineers from Drive.ai, according to a report from the San Francisco Chronicle. It seems Apple is mostly hiring people who work in engineering and product design.

Google’s Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says

Fatema Patrawala
30 Apr 2019
6 min read
Sidewalk Toronto, a joint venture between Sidewalk Labs, which is owned by Google parent company Alphabet Inc., and Waterfront Toronto, is proposing a high-tech neighbourhood called Quayside for the city’s eastern waterfront. In March 2017, Waterfront Toronto shared a Request for Proposal for the project with the Sidewalk Labs team. The project was approved by October 2017 and is currently led by Eric Schmidt, Alphabet Inc CEO, and Daniel Doctoroff, Sidewalk Labs CEO. According to Daneilla Barreto, a digital activism coordinator for Amnesty International Canada, the project will normalize mass surveillance and is a direct threat to human rights. https://twitter.com/AmnestyNow/status/1122932137513164801 The 12-acre smart city, which will be located between East Bayfront and the Port Lands, promises to tackle the social and policy challenges affecting Toronto: affordable housing, traffic congestion and the impacts of climate change. Imagine self-driving vehicles shuttling you around a 24/7 neighbourhood featuring low-cost, modular buildings that easily switch uses based on market demand. Picture buildings heated or cooled by a thermal grid that doesn’t rely on fossil fuels, or garbage collection by industrial robots. Underpinning all of this is a network of sensors and other connected technology that will monitor and track environmental and human behavioural data. That last part, about tracking human data, has sparked concerns. Much ink has been spilled in the press about privacy protections, and the issue has been raised repeatedly by citizens in two of four recent community consultations held by Sidewalk Toronto. The team proposes to build the waterfront neighbourhood from scratch, embed sensors and cameras throughout, and effectively create a “digital layer”. This digital layer may result in the monitoring of individuals’ actions and the collection of their data.

In the Responsible Data Use Policy Framework released last year, the Sidewalk Toronto team made a number of commitments with regard to privacy, such as not selling personal information to third parties or using it for advertising purposes. Barreto further argues that privacy was declared a human right and is protected under the Universal Declaration of Human Rights adopted by the United Nations in 1948. In the Sidewalk Labs conversation, however, privacy has been framed as a purely digital tech issue. Debates have focused on questions of data access: who owns it, how it will be used, where it should all be stored and what should be collected. In other words, the project will collect the minutest details of an individual’s everyday life. For example, it can track what medical offices they enter, what locations they frequent and who their visitors are, in turn giving away clues to physical or mental health conditions, immigration status, whether an individual is involved in any kind of sex work, their sexual orientation or gender identity, or the kind of political views they might hold. Further down the line, that data could affect their health status, employment, where they are allowed to live, or where they can travel. All of this raises a question: do citizens want their data to be collected at this scale at all? That conversation remains long overdue. Not all communities have agreed to participate in this initiative, as marginalized and racialized communities will be affected most by surveillance. The Canadian Civil Liberties Association (CCLA) has threatened to sue the Sidewalk Toronto project, arguing that privacy protections should be spelled out before the project proceeds. Toronto’s Mayor John Tory showed little interest in addressing these concerns during a panel on tech investment in Canada at South by Southwest (SXSW) on March 10; Tory was at the event to promote the city as a go-to tech hub to the international audience at SXSW and other industry events.

Last October, Saadia Muzaffar announced her resignation from Waterfront Toronto’s Digital Strategy Advisory Panel. “Waterfront Toronto’s apathy and utter lack of leadership regarding shaky public trust and social license has been astounding,” the author and founder of TechGirls Canada said in her resignation letter. Later that month, Dr. Ann Cavoukian, a privacy expert and consultant for Sidewalk Labs, tendered her resignation too, as she wanted all data collection to be anonymized or “de-identified” at the source, protecting the privacy of citizens.

Why does big tech really want your data?

Data can be described as a rich resource, or the “new oil”, because it can be mined in a number of ways, from licensing it for commercial purposes to making it open to the public and freely shareable. And like oil, data has the power to create class warfare, permitting those who own it to control the agenda and leaving those who don’t at their mercy. With the flow of data now contributing more to world GDP than the flow of physical goods, there’s a lot at stake for the different players, and different parties benefit in different ways. Corporations are the primary beneficiaries of personal data, monetizing it through advertising, marketing and sales; Facebook, for example, has repeatedly come under the radar over the past two to three years for violating user privacy and mishandling data. For governments, data may serve the public good, improving quality of life for citizens via data-driven design and policies. But in some cases minorities and the poor are heavily impacted by the privacy harms caused by mass surveillance, discriminatory algorithms and other data-driven technological applications. Public and private dissent can also be discouraged via mass surveillance, curtailing freedom of speech and expression.

As per a NY Times report, low-income Americans have experienced a long history of disproportionate surveillance, and the poor bear the burden of both ends of the spectrum of privacy harms: they are subject to greater suspicion and monitoring while applying for government benefits, and live in heavily policed neighborhoods. In some cases they also lose out on education and job opportunities. https://twitter.com/JulieSBrill/status/1122954958544916480 In more promising news, today the Oakland Privacy Advisory Commission released two key documents: one on Oakland’s privacy principles and the other on a ban on facial recognition tech. https://twitter.com/cfarivar/status/1123081921498636288 The commission places emphasis on privacy in its framework, stating, “Privacy is a fundamental human right, a California state right, and instrumental to Oaklanders’ safety, health, security, and access to city services. We seek to safeguard the privacy of every Oakland resident in order to promote fairness and protect civil liberties across all of Oakland’s diverse communities.” Safety will be paramount for smart city initiatives such as Sidewalk Toronto. But we need more Oakland-like laws and policies that protect and support privacy and human rights, ones under which we can use technology safely, without things happening that we didn’t consent to.

Five developer centric sessions at IoT World 2018

Savia Lobo
22 May 2018
6 min read
The Internet of Things has shown remarkable improvement over the years. Basic IoT embedded devices with sensors have now advanced to a level where AI can be deployed into IoT devices to make them smarter. The IoT World 2018 conference was held from May 14th to 17th at the Santa Clara Convention Center, CA, USA. A special developer-centric conference designed specifically for technologists was also part of the larger event. The agenda for the developers’ conference was to bring together the technical leaders whose innovations have shaped the IoT market and the tech enthusiasts who look forward to developing their careers in this domain. The conference also included sessions such as SAP learning, and interesting keynotes on the Intelligent Internet of Things. Here are five sessions that caught our eye at the developer conference at IoT World 2018.

How to develop embedded systems using modern software practices, Kimberly Clavin

Kimberly Clavin highlighted that major challenges in developing autonomous vehicles include system integration and the validation techniques used to ensure quality within the code. There are a plethora of companies that have software at their core and use modern software practices such as Test Driven Development (TDD) and Continuous Integration (CI) for successful development. However, the same tactics cannot be directly implemented in an embedded environment. Kimberly presented ways to adapt these modern software practices for use in the development of embedded systems, which can help developers create systems that are fast, scalable, and cheap. The highlights of this session included: learning to test-drive an embedded component; understanding how to mock out or simulate an unavailable component; and applying Test Driven Development (TDD), Continuous Integration (CI) and mocking to achieve a scalable software process on an embedded project.
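As a sketch of the mocking technique the session describes, the following test-drives a small controller against a simulated sensor using Python's unittest.mock. The component names (FanController, read_celsius) are hypothetical illustrations, not taken from the talk; the point is that the real sensor hardware need not be present on the build machine.

```python
from unittest.mock import Mock

# Hypothetical component under test: reads a temperature sensor and
# decides whether a cooling fan should be switched on.
class FanController:
    def __init__(self, sensor, threshold=70.0):
        self.sensor = sensor          # any object exposing read_celsius()
        self.threshold = threshold

    def update(self):
        """Return True if the fan should run."""
        return self.sensor.read_celsius() > self.threshold

# The physical sensor is unavailable, so mock it out:
sensor = Mock()
sensor.read_celsius.return_value = 85.0
assert FanController(sensor).update() is True   # hot: fan on

sensor.read_celsius.return_value = 40.0
assert FanController(sensor).update() is False  # cool: fan off
```

Because the controller depends only on the sensor's interface, the same tests run in CI on every commit, and the real driver is swapped in only on target hardware.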
How to use machine learning to drive intelligence at the edge, Dave Shuman and Vito De Gaetano

Edge IoT is gaining a lot of traction of late. One way to make the edge intelligent is to build ML models in the cloud and push the learning and the models out to the edge. This session by Dave Shuman and Vito De Gaetano showed how organizations can push intelligence to the edge via an end-to-end open source architecture for IoT, based on Eclipse Kura and Eclipse Kapua. Eclipse Kura is an open source stack for gateways and the edge, whereas Eclipse Kapua is an open source IoT cloud platform. The architecture can enable: securely connecting and managing millions of distributed IoT devices and gateways; machine learning and analytics capabilities, with intelligence and analytics at the edge; a centralized data management and analytics platform with the ability to build or refine machine learning models and push these out to the edge; and application development, deployment and integration services. The presentation also showcased an Industry 4.0 demo, which highlighted how to ingest, process and analyze data coming from factory floors, i.e. from the equipment, and how to enable machine learning on the edge using this data.

How to build complex things in a simplified manner, Ming Zhang

Ming Zhang put forth a simple question: “Why is making hardware so hard?” Some reasons could be: the total time and cost to launch a differentiated product is prohibitively high because of expensive and iterative design, manufacturing and testing; system form factors aren’t flexible, as connected things require richer features and/or smaller sizes; and there is unnecessary complexity in the manufacturing and component supply chain. Designing hardware is a time-consuming process, cumbersome and far less fun for designers than software development.
Ming Zhang showcased a solution, ‘The zGlue ZiPlet Store’, a platform on which users can build complex things with ease. The zGlue Integrated Platform (ZiP) simplifies the process of designing and manufacturing devices for IoT systems and provides seamless integration of both hardware and software on a modular platform.

Building IoT cloud applications at scale with microservices, Dave Chen

This presentation by Dave Chen covered how connectivity, big data, and analytics are transforming several types of business. A major challenge in the IIoT sector is the accumulation of humongous amounts of data, generated by machinery and industrial equipment such as wind turbines and sensors. Valuable information has to be extracted from this data securely, efficiently and quickly. The presentation focused on how one can leverage microservice design principles and other open source platforms to build an effective IoT device management solution in a microservice-oriented architecture. By doing this, securely managing a large population of IoT devices becomes easy and scalable. The talk also covered design patterns and architecture maps for IoT: design patterns are the building blocks of architecture, enabling developers and architects to reuse solutions to common problems, and the presentation showcased how various common design patterns for connected things, common use cases and infrastructure can accelerate the development of connected devices.

Extending security to low-complexity IoT endpoint devices, Francois Le

At present, there are millions of low-compute, low-power IoT sensors and devices deployed, and they are predicted to multiply to billions within a decade. However, these devices do not have any kind of security, even though they hold crucial, real-time information. These low-complexity devices have very limited onboard processing power, less memory and battery capacity, and are typically very low cost.
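To make those constraints concrete, lightweight schemes for such devices often lean on symmetric message authentication rather than full TLS handshakes. Below is a minimal, illustrative sketch using HMAC with a pre-shared key and a message counter; the key, tag truncation, and counter layout are assumptions for illustration, not anything proposed in the session or by a specific vendor.

```python
import hashlib
import hmac

# Illustrative pre-shared key, provisioned per device at manufacture.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def tag_reading(counter: int, payload: bytes) -> bytes:
    """Compute a short authentication tag over a counter + payload.
    The counter resists replay; the tag is truncated to save airtime."""
    msg = counter.to_bytes(4, "big") + payload
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()[:8]

def verify_reading(counter: int, payload: bytes, tag: bytes) -> bool:
    """Constant-time check of a received reading's tag."""
    return hmac.compare_digest(tag_reading(counter, payload), tag)

tag = tag_reading(1, b"temp=21.5")
assert verify_reading(1, b"temp=21.5", tag)        # genuine reading accepted
assert not verify_reading(2, b"temp=21.5", tag)    # replayed/altered counter rejected
```

A single HMAC per message fits in a few kilobytes of code and costs one hash computation, which is why schemes in this space tend to start from symmetric primitives rather than certificate exchanges.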
Low-complexity IoT devices cannot work like IoT edge devices, which can easily handle validation and encryption techniques and have plenty of processing power for the multiple message exchanges used in authentication. The presentation argued that a new security scheme needs to be designed from the ground up, one that takes up less space on the processor and has a low impact on battery life and cost. The solution should be: IoT-platform agnostic and easy for IoT vendors to implement; operable seamlessly over any wireless technology (e.g. Zigbee, BLE, LoRa); transparent to the existing network implementation; automated and scalable to very high volumes; able to evolve as new security and encryption techniques are released; and able to last a long time in the field with no need to update the edge devices with security patches. Apart from these, many other presentations were showcased at IoT World 2018 for developers, including ‘Minimize Cybersecurity Risks in the Development of IoT Solutions’ and ‘Internet of Things (IoT) Edge Analytics: Do's and Don’ts’. Read more of the keynotes presented at this exciting IoT World conference 2018 on its official website.
Fatema Patrawala
12 Apr 2019
4 min read

Walmart to deploy thousands of robots in its 5000 stores across US

Walmart, the world's largest retailer, is following the latest tech trend and going all in on robots. It plans to deploy thousands of robots for lower-level jobs in 5,000 of its 11,348 US stores. In a statement released on its blog on Tuesday, the retail giant said that it was unleashing a number of technological innovations, including autonomous floor cleaners, shelf-scanners, conveyor belts, and "pickup towers", on stores across the United States. Elizabeth Walker from Walmart Corporate Affairs says, "Every hero needs a sidekick, and some of the best have been automated. Smart assistants have huge potential to make busy stores run more smoothly, so Walmart has been pioneering new technologies to minimize the time an associate spends on the more mundane and repetitive tasks like cleaning floors or checking inventory on a shelf. This gives associates more of an opportunity to do what they're uniquely qualified for: serve customers face-to-face on the sales floor." Further, Walmart announced that it would be adding 1,500 new floor cleaners, 300 more shelf-scanners, 1,200 conveyor belts, and 900 new pickup towers. The robots have been tested in dozens of markets and hundreds of stores to prove their effectiveness. Also, the idea of replacing people with machines for certain job roles will reduce costs for Walmart: if you are not hiring people, they can't quit, demand a living wage, or take sick days, resulting in better margins and efficiencies. According to Walmart CEO Doug McMillon, "Automating certain tasks gives associates more time to do work they find fulfilling and to interact with customers. 
Continuing this logic, the retailer points to robots as a source of greater efficiency, increased sales and reduced employee turnover." "Our associates immediately understood the opportunity for the new technology to free them up from focusing on tasks that are repeatable, predictable and manual," John Crecelius, senior vice president of central operations for Walmart US, said in an interview with BBC Insider. "It allows them time to focus more on selling merchandise and serving customers, which they tell us have always been the most exciting parts of working in retail." With the war for talent raging on in the world of retail and demands for minimum wage hikes a frequent occurrence, Walmart's expanding robot army is a signal that the company is committed to keeping labor costs down. Does that come at the cost of job cuts or employee restructuring? Walmart has not specified how many jobs it will cut as a result of this move. But when automation takes place at the largest retailer in the US, significant job losses can be expected. https://twitter.com/NoelSharkey/status/1116241378600730626 Early last year, Bloomberg reported that Walmart is removing around 3,500 store co-managers, a salaried role that acts as a lieutenant underneath each store manager. The US in particular has an inordinately high proportion of employees performing routine functions that could be easily automated. As such, retail automation is bound to hit them the hardest. With costs on the rise, and Amazon a constant looming threat that has resulted in the closing of thousands of mom-and-pop stores across the US, it was inevitable that Walmart would turn to automation as a way to stay competitive in the market. As the largest retail employer in the US transitions to an automated retailing model, it will leave a good proportion of the 704,000-strong US retail workforce either unemployed, underemployed, or unready to transition into other jobs. 
How much Walmart assists its redundant workforce in transitioning to another livelihood will be a litmus test of its widely held image as a caring employer, in contrast to Amazon's ruthless image. How Rolls Royce is applying AI and robotics for smart engine maintenance AI powered Robotics: Autonomous machines in the making Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
Prasad Ramesh
11 Sep 2018
4 min read

Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.

Last Friday, a group of Spanish researchers published a research paper titled 'Towards a distributed and real-time framework for robots: evaluation of ROS 2.0 communications for real-time robotic applications'. The paper describes an experimental setup exploring the suitability of ROS 2.0 for real-time robotic applications. In it, ROS 2.0 communications are evaluated for robotic inter-component communication on hardware running Linux. The researchers benchmarked and studied worst-case latencies and characterized ROS 2.0 communications for real-time applications. The results indicate that a proper real-time configuration of the ROS 2.0 framework reduces jitter, making soft real-time communications possible, but there were also some limitations that prevented hard real-time communications.

What is ROS?

ROS is a popular framework that provides services for the development of robotic applications. It has utilities like a communication infrastructure, drivers for a variety of software and hardware components, and libraries for diagnostics, navigation, manipulation, and other tasks. ROS simplifies the process of creating complex and robust robot behavior across many robotic platforms. ROS 2.0 is the new version, which extends the concepts of the first version. Data Distribution Service (DDS) middleware is used in ROS 2.0 due to its characteristics and benefits compared to other solutions.

Need for real-time applications in robotic systems

In all robotic systems, tasks need to be time responsive. While moving at a certain speed, robots must be able to detect an obstacle and stop to avoid collision. These robot systems often have timing requirements to execute tasks or exchange data. If the timing requirements are not met, the system's behavior will degrade or the system will fail. With ROS being the standard software infrastructure for robotic application development, demand rose in the ROS community to include real-time capabilities. 
Hence, ROS 2.0 was created to deliver real-time performance. But to deliver a complete, distributed, and real-time solution for robots, ROS 2.0 needs to be surrounded with appropriate elements. These elements are described in the papers Time-sensitive networking for robotics and Real-time Linux communications: an evaluation of the Linux communication stack for real-time robotic applications. ROS 2.0 uses DDS as its communication middleware. DDS contains Quality of Service (QoS) parameters which can be configured and tuned for real-time applications.

The results of the experiment

In the research paper, a setup was made to measure the real-time performance of ROS 2.0 communications over Ethernet on a PREEMPT-RT patched kernel. The end-to-end latencies between two ROS 2.0 nodes on different machines were measured. A Linux PC and an embedded device, representing a robot controller (RC) and a robot component (C), were used for the setup. An overview of the setup and some of the results are shown in figures in the paper (image source: LinkedIn). One figure shows the impact of RT settings under different system load: a) system without additional load and without RT settings, b) system under load without RT settings, c) system without additional load and with RT settings, d) system under load and with RT settings. The results from the experiment showed that a proper real-time configuration of the ROS 2.0 framework and DDS threads greatly reduces jitter and worst-case latencies, meaning smoother and faster communication. However, there were also some limitations when non-critical traffic is present in the Linux network stack. By configuring the network interrupt threads and using Linux traffic control QoS methods, some of the problems could be avoided. The researchers conclude that it is possible to achieve soft real-time communications with mixed-critical traffic using the Linux network stack. 
However, hard real-time communication is not possible due to the aforementioned limitations. For a more detailed understanding of the experiments and results, you can read the research paper. Shadow Robot joins Avatar X program to bring real-world avatars into space 6 powerful microbots developed by researchers around the world Boston Dynamics' 'Android of robots' vision starts with launching 1000 robot dogs in 2019
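The paper's core measurement, end-to-end latency with attention to worst-case values and jitter, can be illustrated with a simple sketch. This is plain Python over UDP loopback rather than actual ROS 2.0/DDS transport, and all names and parameters here are illustrative, not from the paper:

```python
import socket
import statistics
import threading
import time

# Illustrative stand-in for a latency benchmark: measure round-trip times
# over UDP loopback, then report median, worst-case latency, and jitter.

def echo_server(sock):
    # Echo every datagram back to its sender until a "stop" message arrives.
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"stop":
            break
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_addr = server.getsockname()
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
latencies = []
for _ in range(200):
    start = time.perf_counter()
    client.sendto(b"ping", server_addr)
    client.recvfrom(1024)  # wait for the echo
    latencies.append((time.perf_counter() - start) * 1e6)  # microseconds

client.sendto(b"stop", server_addr)
worst_case = max(latencies)
median = statistics.median(latencies)
jitter = worst_case - min(latencies)  # simple peak-to-peak jitter estimate
print(f"median={median:.1f}us worst={worst_case:.1f}us jitter={jitter:.1f}us")
```

The interesting number for real-time work is not the median but the worst case: a PREEMPT-RT kernel and proper thread priorities shrink the gap between the two, which is exactly what the paper's figures compare.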
Melisha Dsouza
19 Feb 2019
3 min read

Firedome’s ‘Endpoint Protection’ solution for improved IoT security

Last month, Firedome Inc announced the launch of the world's first endpoint cybersecurity solutions portfolio specifically tailored to home IoT companies and manufacturers. Firedome has developed business models that allow companies to implement top-quality endpoint cybersecurity solutions to close critical security gaps that are a byproduct of the IoT era. Home IoT devices are susceptible to cyber attacks due to the lack of regulation and budget limitations. Cryptojacking, DDoS, and ransomware attacks are only a few examples of cyber crimes threatening the smart home ecosystem and consumer privacy. The low margins in this industry have made it hard for manufacturers to implement high-end cybersecurity solutions.

Features of the Firedome 'Endpoint Protection' solution:

- A lightweight software agent that can easily be added to any connected device (during the manufacturing process or later on, 'over the air')
- A cloud-based AI engine that collects and analyzes aggregated data from multiple fleets around the world, produces insights from each attack (or attack attempt), and optimizes them across the board
- An accompanying 24/7 SOC team that responds to alerts, conducts security research, and supports Firedome customers

The Firedome solution adds a dynamic layer of protection and is designed not only to prevent attacks from occurring in the first place but also to identify attack attempts and respond to breaches in real time, thereby eliminating damage potential until a firmware update is released. The Firedome Home Solution enables industry players to provide their consumers with cyber protection and security insights for the entire home network. Moti Shkolnik, Firedome's co-founder and CEO, says: "We are very excited to formally launch our suite of services and solutions for the home IoT industry and we strongly believe they have the potential of changing the Home IoT cybersecurity landscape. 
Device companies and other ecosystem players are craving a solution that is tailored to their needs and business constraints, a solution that will address the vulnerability that is so evident in endpoint devices. Home IoT devices are becoming a commodity and the industry must address these vulnerabilities sooner rather than later. That's why our solution is a 'must-have' rather than a 'nice-to-have'." These solutions have led to Firedome's selection by Universal Electronics Inc., the worldwide leader in universal control and sensing technologies for the smart home, to provide cybersecurity features for its Nevo® Butler digital assistant platform product. To know more about this news in detail, head over to Firedome's official website. California passes the U.S.' first IoT security bill IoT Forensics: Security in an always connected world where things talk AWS IoT Greengrass extends functionality with third-party connectors, enhanced security, and more
Bhagyashree R
18 Apr 2019
2 min read

Researchers at UC Berkeley's Robot Learning Lab introduce Blue, a new low-cost force-controlled robot arm

Yesterday, a team of researchers from UC Berkeley's Robot Learning Lab announced the completion of their three-year-long project called Blue. It is a low-cost, high-performance robot arm built to work in real-world environments such as warehouses, homes, hospitals, and urban landscapes. https://www.youtube.com/watch?v=KZ88hPgrZzs&feature=youtu.be With Blue, the researchers aim to significantly accelerate research towards useful home robots. Blue is capable of mimicking human motions in real-world environments and enables more intuitive teleoperation. Pieter Abbeel, the director of the Berkeley Robot Learning Lab and co-founder and chief scientist of AI startup Covariant, shared the vision behind the project: "AI has been moving very fast, and existing robots are getting smarter in some ways on the software side, but the hardware's not changing. Everybody's using the same hardware that they've been using for many years . . . We figured there must be an opportunity to come up with a new design that is better for the AI era."

Blue design details

Its dynamic properties meet or exceed the needs of a human operator; for instance, the robot has a nominal position-control bandwidth of 7.5 Hz and repeatability within 4 mm. It is a kinematically anthropomorphic robot arm with a 2 kg payload that costs less than $5,000. It has 7 degrees of freedom: 3 in the shoulder, 1 in the elbow, and 3 in the wrist. Blue has quasi-direct drive (QDD) actuators, which offer better force control and selectable impedance and are highly backdrivable. These actuators make Blue resilient to damage and safer for humans to be around. The team is first distributing early-release arms to developers and industry partners, so we could see a product release within the next six months. The team is also planning a production edition of the Blue robot arm, which will be available by 2020. To read more on Blue, check out the Berkeley Open Arms site. 
Walmart to deploy thousands of robots in its 5000 stores across US Boston Dynamics’ latest version of Handle, robot designed for logistics Setting up a Raspberry Pi for a robot – Headless by Default [Tutorial]
Savia Lobo
07 Sep 2018
2 min read

Shadow Robot joins Avatar X program to bring real-world avatars into space

The Shadow Robot Company, experts in grasping and manipulation for robotic hands, announced that they are joining a new space avatar program named AVATAR X. This program is led by ANA HOLDINGS INC. (ANA HD) and the Japan Aerospace Exploration Agency (JAXA). AVATAR X aims to accelerate the integration of technologies such as robotics, haptics, and Artificial Intelligence (AI) to enable humans to remotely build camps on the Moon, support long-term space missions, and further explore space from Earth. To make this possible, Shadow will work closely with the programme's partners, leveraging the unique teleoperation system it has already developed, which is also available to purchase. AVATAR X is set to launch as a multi-phase programme. It aims to revolutionize space development and make living on the Moon, Mars, and beyond a reality.

What will the AVATAR X program include?

The AVATAR X program will comprise several clever elements, including Shadow's Dexterous Hand, which can be controlled by a CyberGlove worn by the operator. The hand will be attached to a UR10 robot arm controllable by a PhaseSpace motion capture tool worn on the operator's wrist. Both the CyberGlove and the motion capture wrist tool have mapping capability so that the Dexterous Hand and the robot arm can mimic an operator's movements. The new system allows remote control of robotic technologies while providing distance and safety. Furthermore, Shadow uses an open source platform, providing full access to the code to help users develop the software for their own specific needs. Shadow's Managing Director, Rich Walker, says: "We're really excited to be working with ANA HD and JAXA on the AVATAR X programme and it gives us the perfect opportunity to demonstrate how our robotics technology can be leveraged for avatar or teleoperation scenarios away from UK soil, deep into space. 
We want everyone to feel involved at such a transformative time in teleoperation capabilities and encourage all those interested to enter the AVATAR XPRIZE competition.” To know more about AVATAR X in detail, visit ANA Group’s press release. Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics How Rolls Royce is applying AI and robotics for smart engine maintenance AI powered Robotics : Autonomous machines in the making  
Amrata Joshi
07 Jan 2019
3 min read

Hyundai joins Automotive Grade Linux and the Linux Foundation to strengthen its innovation plans

Last week, Automotive Grade Linux (AGL), a collaborative project developing an open platform for the connected car, announced that Hyundai has joined AGL and the Linux Foundation to drive innovation through open source. AGL is a cross-industry effort that brings together automakers, suppliers, and technology companies to accelerate the development and adoption of an open software stack for the connected car. Dan Cauchy, Executive Director of Automotive Grade Linux at the Linux Foundation, said, "Hyundai has been active in open source for years, and their experience will benefit the entire AGL community." He further added, "This is a significant milestone for us, as the rapid growth of AGL proves that automakers are realizing the business value that open source and shared software development can provide. We look forward to working with Hyundai as we continue on our path to develop open source solutions for all in-vehicle technology." With Linux at its core, AGL is focused on In-Vehicle Infotainment (IVI). It is the only organization planning to address all software in the vehicle, including heads-up display, instrument cluster, telematics, advanced driver assistance systems (ADAS), and autonomous driving. The Linux Foundation Collaborative Projects are independently funded software projects that power collaborative development to bring innovation across industries and ecosystems.

AGL Unified Code Base

The AGL Unified Code Base (UCB) platform is an open source software platform for telematics, infotainment, and instrument cluster applications. It provides 70% of the starting point for a production project and includes an operating system, middleware, and an application framework. Suppliers and automakers can easily customize the platform with features and services to meet their product and customer needs. 
The AGL Unified Code Base has been recognized as a CES 2019 Innovation Awards Honoree in the Software and Mobile Apps category. Paul Choo, Vice President and Head of Infotainment Technology Center at Hyundai Motor Company, said, "Open collaboration is essential as we realize our connected car vision. AGL has built a robust platform that offers the flexibility to design and build new services on top of it, and quickly bring them to market." In 2019, the AGL booth will feature open source technologies from AGL members AISIN AW, Audiokinetic, DENSO, Cognomotiv, DENSO TEN, EPAM Systems, Fiberdyne Systems, SafeRide Technologies, Tuxera, and VNC Automotive, among others. The booth will be open during CES show hours from January 8-11, 2019. Read more about this news on the Linux Foundation website. Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users An update on Bcachefs- the "next generation Linux filesystem" The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA)  
Amrata Joshi
14 May 2019
3 min read

Amazon to roll out automated machines for boxing up orders: Thousands of workers’ job at stake

Amazon has recently shown tremendous growth in bringing automation to its warehouses, and now it seems to be taking automation and AI to another level, introducing technologies that replace manual work. Last year, Amazon started incorporating technology in a handful of warehouses to scan goods coming down a conveyor belt. Amazon is now set to roll out specially made automated machines capable of boxing up orders, taking over a manual job currently held by thousands of workers, Reuters reports. Currently, the company has considered installing two machines at more than a dozen warehouses, removing at least 24 job roles at each one; this setup usually involves more than 2,000 people. If the automation is implemented, it will lead to more than 1,300 job cuts across 55 U.S. fulfillment centers for standard-sized inventory. The company expects to recover the cost of the machines in two years, at around $1 million per machine plus operational expenses. The changes haven't been finalized yet, though, because vetting the technology might take more time. In a statement to Reuters, an Amazon spokesperson said, "We are piloting this new technology with the goal of increasing safety, speeding up delivery times and adding efficiency across our network. We expect the efficiency savings will be re-invested in new services for customers, where new jobs will continue to be created." Boxing multiple orders per minute over 10 hours is a very difficult job. The latest machines, known as CartonWrap and made by the Italian firm CMC Srl, pack boxes much faster than humans: they can pack 600 to 700 boxes per hour, or four to five times the rate of a human packer. The company's employees might be trained to take up more technical roles. 
According to an Amazon spokesperson, the company is not just aiming at speeding up the process: "It's truly about efficiency and savings." But Amazon's hiring deals with governments tell a different story. The company announced 1,500 jobs last year in Alabama, and the state had promised Amazon $48.7 million in return over 10 years. Amazon is not alone in this league of automation: Walmart plans to deploy thousands of robots for lower-level jobs in its 5,000 stores in the US, bringing in autonomous floor cleaners, shelf-scanners, conveyor belts, and "pickup towers". Looking at the pace at which companies like Amazon and Walmart are implementing technology in the retail space, the future of advanced tech-enabled warehouses is near. But it will come at the cost of existing workers, whose jobs will be at stake. Amazon S3 is retiring support for path-style API requests; sparks censorship fears Amazon introduces S3 batch operations to process millions of S3 objects Amazon finally agrees to let shareholders vote on selling facial recognition software
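The figures quoted in the article can be cross-checked with some quick arithmetic. This is a hedged back-of-envelope sketch: the implied human packing rate is derived from the article's "four to five times" claim, not stated directly anywhere:

```python
# Back-of-envelope checks on the figures quoted in the article.

# "at least 24 job roles" removed at each of "55 U.S. fulfillment centers"
roles_removed = 24 * 55
print(roles_removed)  # 1320 -- consistent with "more than 1,300 job cuts"

# CartonWrap machines pack "600 to 700 boxes per hour, or four to five
# times the rate of a human packer" -> implied human rate (midpoints):
machine_rate = (600 + 700) / 2           # boxes per hour
implied_human_rate = machine_rate / 4.5  # midpoint of "four to five times"
print(round(implied_human_rate))         # roughly 140-150 boxes per hour
```

The two quoted numbers are consistent with each other, which suggests the "more than 1,300" figure is simply the per-warehouse role count scaled across the 55 centers.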
Amrata Joshi
20 Feb 2019
4 min read

Uber and GM Cruise are open sourcing their Autonomous Visualization Systems

Yesterday, Uber and GM Cruise announced that they are open sourcing their respective autonomous visualization systems, giving the industry a new way to understand and share its data. Uber's Autonomous Visualization System (AVS) has become a new standard for describing and visualizing autonomous vehicle perception, motion, and planning data, while offering a web-based toolkit for building applications. AVS makes it easier for developers to make decisions during development. It is free for users, which might encourage developers to come up with interesting developments for the autonomous industry. AVS acts as a standardized visualization layer that frees developers from building custom visualization software for their autonomous vehicles. With visualization abstracted by AVS, developers can focus on core autonomy capabilities for drive systems, remote assistance, mapping, and simulation. Operators need to understand why their cars make certain decisions, and the visualization system helps engineers break out and play back certain trip intervals for closer inspection. AV operators typically rely on off-the-shelf visualization systems that aren't designed with self-driving cars in mind; they are usually limited to bulky desktop computers that are difficult to navigate. Uber has moved to a web-based visualization platform so operators don't have to learn complex computer graphics and data visualization.

Uber opted for XVIZ and streetscape.gl

Autonomous vehicle development is rapidly evolving, with new services, data sets, and many use cases that require new solutions. The team at Uber had unique requirements that needed to be addressed: they wanted to manage the data while retaining performance comparable to desktop-based systems. So the team built a system around two key pieces: XVIZ, which provides the data (including management and specification), and streetscape.gl, the component toolkit to power web applications. 
Uber's new tool seems to be geared more to AV operators specifically. Talking about its Autonomous Visualization System, the company said, "It is a customizable web-based platform that allows self-driving technology developers — big or small — to transform their vehicle data into an easily digestible visual representation of what the vehicle is seeing in the real world."

XVIZ

XVIZ provides a stream-oriented view of a scene changing over time as well as a user interface display system. Users can randomly seek and understand the state of the world at that point in time. Like an HTML document, its presentation is focused and structured according to a schema that allows for introspection, enabling easy exploration and interrogation of the data.

streetscape.gl

streetscape.gl is a toolkit for developing web applications that consume data in the XVIZ protocol. It offers components for visualizing XVIZ streams in 3D viewports, charts, tables, videos, and more. It addresses common visualization challenges such as time synchronization across data streams, coordinate systems, cameras, dynamic styling, and interaction with 3D objects across components. Voyage co-founder Warren Ouyang said, "We're excited to use Uber's autonomous visualization system and collaborate on building better tools for the community going forward." Last week, in a Medium post, Cruise introduced its graphics library for two- and three-dimensional scenes called "Worldview". It provides 2D and 3D cameras, keyboard and mouse movement controls, click interaction, and a suite of built-in drawing commands. Developers can build custom visualizations easily, without having to learn complex graphics APIs or write wrappers to make them work with React. 
In the post, Cruise said, "We hope Worldview will lower the barrier to entry into the powerful world of WebGL, giving web developers a simple foundation and empowering them to build more complex visualizations." Uber releases Ludwig, an open source AI toolkit that simplifies training deep learning models for non-experts Automation and Robots – Trick or Treat? Home Assistant: an open source Python home automation hub to rule all things smart  
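XVIZ's stream-oriented design can be made concrete with a small sketch. This is a hedged illustration only: the field names below approximate the general shape of an XVIZ state update (a timestamped pose plus primitive streams, serialized as JSON for the browser), not the exact published schema:

```python
import json

# Hedged sketch of an XVIZ-style state update: a timestamped snapshot of
# vehicle pose plus one primitive stream. Field names are illustrative.
state_update = {
    "type": "xviz/state_update",
    "data": {
        "update_type": "snapshot",
        "updates": [
            {
                "timestamp": 1050.0,
                "poses": {
                    "/vehicle_pose": {
                        "timestamp": 1050.0,
                        "position": [4.4, 1.2, 0.0],
                        "orientation": [0.0, 0.0, 1.57],
                    }
                },
                "primitives": {
                    "/lidar/points": {
                        # One 3D point; real streams carry thousands.
                        "points": [{"points": [1.0, 2.0, 0.0]}]
                    }
                },
            }
        ],
    },
}

# Serialize for transport to a streetscape.gl-style web client, then decode
# as a consumer would to seek to this point in time.
message = json.dumps(state_update)
decoded = json.loads(message)
print(decoded["data"]["updates"][0]["timestamp"])
```

Because each update is self-describing and timestamped, a viewer can seek to any point in a recorded trip and reconstruct the scene, which is what enables the playback-for-inspection workflow described above.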
Prasad Ramesh
31 Oct 2018
3 min read

Apple launches iPad Pro, updates MacBook Air and Mac mini

At an event in Brooklyn, New York yesterday, Apple unveiled the new iPad Pro, the new MacBook Air, and the Mac mini.

iPad Pro

Following the trend, the new iPad Pro sports a larger screen-to-body ratio with minimal bezels. Powering the new iPad is an eight-core A12X Bionic chip, powerful enough for Photoshop CC, coming in 2019. There is a USB-C connector, Gigabit-class LTE, and up to 1TB of storage. There are two variants, with 11-inch and 12.9-inch Liquid Retina displays. (Image source: Apple) The display can go up to 120Hz for smooth scrolling, but the headphone jack has been removed. Battery life is stated to be 10 hours. The dedicated Neural Engine supports tasks requiring machine learning, from photography to AR. Apple is calling it the 'best device ever for AR' due to its cameras, sensors, and improved four-speaker audio combined with the power of the A12X Bionic chip. There is also a second-generation Apple Pencil that magnetically attaches to the iPad Pro and charges at the same time. The Smart Keyboard Folio is made for versatility. The keyboard and Apple Pencil are sold separately.

MacBook Air

The new MacBook Air features a 13-inch Retina display, Touch ID, a newer i5 processor, and a more portable design compared to the previous MacBook. This MacBook Air is the cheapest MacBook to sport a Retina display, with a resolution of 2560×1600. There is a built-in 720p FaceTime camera. For better security, there is Touch ID, a fingerprint sensor built into the keyboard, and a T2 security chip. (Image source: Apple) Each key in the keyboard is individually lit, and the touchpad area is also larger. The new MacBook Air comes with an 8th-generation Intel Core i5 processor, Intel UHD Graphics, and faster 2133 MHz RAM up to 16GB. Storage options are available up to 1.5TB. There are only two Thunderbolt 3 USB-C ports and a 3.5mm headphone jack, no other connectors. Apple says that the new MacBook Air is faster and provides a snappier experience. 
Mac mini

The Mac mini got a big performance boost, being five times faster than the previous one. There are options for either four- or six-core processors, with turbo boost that can go up to 4.6GHz. Both versions come with Intel UHD Graphics 630. For memory, there is up to 64GB of 2666 MHz RAM. (Image source: Apple) The new Mac mini also features a T2 security chip, so files stored on the SSD are automatically and fully encrypted. There are four Thunderbolt 3 ports, a 10-gigabit Ethernet port, an HDMI 2.0 port, two USB-A ports, and a 3.5mm audio jack. Storage options are available up to 2TB. Apple says that both the MacBook Air and the Mac mini are made with 100% recycled aluminum, which reduces the carbon footprint of these devices by 50%. Visit the Apple website to see availability and pricing of the iPad Pro, MacBook Air, and Mac mini. 'Think different' makes Apple the world's most valuable company, crossing $1 Trillion market cap Apple releases iOS 12 beta 2 with screen time and battery usage updates among others Apple and Amazon take punitive action against Bloomberg's 'misinformed' hacking story