
Tech News

3711 Articles

Firefox releases v66.0.4 and 60.6.2 to fix the expired certificate problem that ended up disabling add-ons

Bhagyashree R | 06 May 2019 | 3 min read
Last Friday, Firefox users were left infuriated when all their extensions were abruptly disabled. Fortunately, Mozilla has fixed the issue in yesterday's releases, Firefox 66.0.4 and Firefox 60.6.2.

https://twitter.com/mozamo/status/1124484255159971840

This is not the first time Firefox users have encountered this type of problem. A similar issue was reported back in 2016, and it seems proper steps were not taken to prevent it from recurring.

https://twitter.com/Theobromia/status/1124791924626313216

Multiple users reported that all their add-ons were disabled because of failed verification. Users were also unable to download any new add-ons and were shown a "Download failed. Please check your connection" error despite having a working connection. This happened because the certificate with which the add-ons were signed had expired. The validity window stated in the certificate was:

Not Before: May 4 00:09:46 2017 GMT
Not After : May 4 00:09:46 2019 GMT

Mozilla shared a temporary hotfix ("hotfix-update-xpi-signing-intermediate-bug-1548973") before releasing a permanent fix.

https://twitter.com/mozamo/status/1124627930301255680

To apply this hotfix automatically, users need to enable Studies, a feature through which Mozilla tries out new features before releasing them to all users. The Studies feature is enabled by default, but if you have previously opted out of it, you can re-enable it by navigating to Options | Privacy & Security | Allow Firefox to install and run studies.

https://twitter.com/mozamo/status/1124731439809830912

Mozilla released Firefox 66.0.4 for desktop and Android users and Firefox 60.6.2 for ESR (Extended Support Release) users yesterday with a permanent fix for this issue. These releases repair the certificate chain to re-enable web extensions that were disabled because of it. There are still some issues that Mozilla is currently working to resolve:

- A few add-ons may appear unsupported or not appear in about:addons. Mozilla assures that add-on data will not be lost, as it is stored locally and can be recovered by re-installing the add-ons.
- Themes will not be re-enabled and will switch back to default.
- If a user's home page or search settings were customized by an add-on, they will be reset to default.
- Users might see that Multi-Account Containers and Facebook Container are reset to their default state. Containers is a functionality that lets you segregate your browsing activities within different profiles. As an aftereffect of this certificate issue, data that might be lost includes the configuration of which containers to enable or disable, container names, and icons.

Many users depend on Firefox's extensibility to get their work done, and this issue has understandably left many of them sour. "This is pretty bad for Firefox. I wonder how much people straight up & left for Chrome as a result of it," a user commented on Hacker News.

Read the Mozilla Add-ons Blog for more details.
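The dates quoted above are the crux of the failure: signature verification fails as soon as the clock passes a certificate's "Not After" timestamp. As a minimal sketch of that check, using the article's dates rather than Mozilla's actual verification code:

```python
# Minimal sketch: why add-on signature verification started failing.
# A certificate is only trusted inside its validity window.
from datetime import datetime, timezone

not_before = datetime(2017, 5, 4, 0, 9, 46, tzinfo=timezone.utc)
not_after = datetime(2019, 5, 4, 0, 9, 46, tzinfo=timezone.utc)

now = datetime.now(timezone.utc)
if not (not_before <= now <= not_after):
    print("Intermediate certificate expired: add-on signatures fail verification")
```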
Read next:
Mozilla's updated policies will ban extensions with obfuscated code
Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices
Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly


Zeit releases Serverless Docker in beta

Richard Gall | 15 Aug 2018 | 3 min read
Zeit, the organization behind the cloud deployment software Now, yesterday launched Serverless Docker in beta. The concept was first discussed by the Zeit team at Zeit Day 2018 back in April, but it's now available to use and promises to radically speed up deployments for engineers.

In a post published on the Zeit website yesterday, the team listed some of the key features of this new capability, including:

- An impressive 10x-20x improvement in cold boot performance (in practice, this means cold boots can happen in less than a second)
- A new slot configuration property that defines resource allocation in terms of CPU and memory, allowing you to fit an application within the set of constraints that are most appropriate for it
- Support for HTTP/2.0 and WebSocket connections to deployments, which means you no longer need to rewrite applications as functions (a sketch of what this unlocks follows at the end of this article)

The key point to remember with this release, according to Zeit, is that "Serverless can be a very general computing model. One that does not require new protocols, new APIs and can support every programming language and framework without large rewrites."

Read next: Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1

What's so great about Serverless Docker?

Clearly, speed is one of the most exciting things about Serverless Docker. But there's more to it than that: it also offers a great developer experience. Johannes Schickling, co-founder and CEO of Prisma (a GraphQL data abstraction layer), said that, with Serverless Docker, Zeit "is making compute more accessible. Serverless Docker is exactly the abstraction I want for applications."

https://twitter.com/schickling/status/1029372602178039810

Others on Twitter were also complimentary about Serverless Docker's developer experience, with one person comparing it favourably with AWS: "their developer experience just makes me SO MAD at AWS in comparison."

https://twitter.com/simonw/status/1029452011236777985

Combining serverless and containers

One of the reasons people are excited about Zeit's release is that it provides the next step in serverless, but it also brings containers into the picture. Typically, much of the conversation around software infrastructure over the last year or so has viewed serverless and containers as two options to choose from rather than two things that can be used together.

It's worth remembering that Zeit's product has largely been developed alongside the customers that use Now: "This beta contains the lessons and the experiences of a massively distributed and diverse user base, that has completed millions of deployments, over the past two years."

Eager to demonstrate how Serverless Docker works for a wide range of use cases, Zeit has put together a long list of examples of Serverless Docker in action on GitHub. You can find them here.
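Because deployments now accept WebSocket connections directly, long-lived connection-oriented apps no longer have to be reshaped into functions. As a rough illustration only (the third-party websockets package and the port are assumptions for this sketch; any containerized server would do):

```python
# A minimal WebSocket echo server: the kind of long-lived, stateful app
# that function-based serverless can't host, but a Docker deployment can.
# Requires the third-party "websockets" package; the default path=None
# keeps the handler compatible with versions that pass a second argument.
import asyncio
import websockets

async def echo(websocket, path=None):
    async for message in websocket:   # keep the connection open, echo back
        await websocket.send(message)

async def main():
    async with websockets.serve(echo, "0.0.0.0", 8080):
        await asyncio.Future()        # serve forever

if __name__ == "__main__":
    asyncio.run(main())
```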
Read next:
A serverless online store on AWS could save you money. Build one.
Serverless computing wars: AWS Lambdas vs Azure Functions


‘Ethical mobile operating system’ /e/, an alternative for Android and iOS, is in beta

Prasad Ramesh | 11 Oct 2018 | 5 min read
Right now, Android and iOS are the most widely used OSes on mobile phones. Both are owned by giant corporations, and there are few other offerings in line with public interest, privacy, or affordability. Android is owned by Google, which can hardly be called pro user privacy given all the tracking it does. iOS by Apple is a very closed OS, and not exactly affordable for the masses. Apart from some OSes in the works, there is an OS called /e/, or eelo, from the creator of Mandrake Linux, focused on user privacy.

Some OSes in the works

Some of the mobile OSes in the works include Tizen from Samsung, which has been released only on entry-level smartphones. There is also an OS in the making by Huawei. Google has been working on a new OS called Fuchsia. It uses a new microkernel called Zircon, created by Google, instead of Linux. It is also in the early stages, and there is no clear indication of the purpose behind building Fuchsia when Android is ubiquitous in the market. Google was fined $5B over Android antitrust earlier this year; maybe Fuchsia can come into the picture here. In response to the EU's decision to fine Google, Sundar Pichai said that preventing Google from bundling its apps would "upset the balance of the Android ecosystem" and that the Android business model guaranteed zero charges for phone makers. This seems like a warning that Google may consider licensing Android to phone makers.

Will curtains be closed on Android over legal disputes? That does not seem very likely, considering Android smartphones and Google's services on those smartphones are a big source of income for Google. They would not let it go that easily, and I'm not sure the world is ready to let go of Android either. It has given the masses access to apps, information, and connectivity. However, there is growing discontent among Android users, developers, and handset partners. Whether that frustration will pivot enough to create a viable market for an alternative mobile OS is something only time can tell. Either way, there is one OS, /e/ or eelo, intent on displacing Android. It has made some progress, though it is not exactly an OS made from scratch.

What is eelo?

The above-mentioned OSes are far from complete and owned by large corporations. Here comes eelo: it is free and open source. It is a fork of LineageOS with all the Google apps and services removed. But that's not all; it also has a select few default applications, a new user interface, and several integrated online services. The /e/ ROM is in beta and can be installed on several devices. More devices will be supported as more contributors port and maintain it for different hardware. The ROM uses microG instead of Google's core apps. It uses Mozilla NLP, which makes geolocation available even when a GPS signal is not.

eelo project leader Gaël Duval states: "At /e/, we want to build an alternative mobile operating system that everybody can enjoy using, one that is a lot more respectful of user's data privacy while offering them real freedom of choice. We want to be known as the ethical mobile operating system, built in the public interest."

BlissLauncher is included, with original icons and support for widgets and automatic icon sizing based on screen pixel density. There are new default applications: a mail app, an SMS app (Signal), a chat application (Telegram), along with a weather app, a notes app, a tasks app, and a maps app. There is an /e/ account manager in which users can choose to use a single /e/ identity (user@e.email) for all services. It will also have OTA updates. The default search engine is searX, with Qwant and DuckDuckGo as alternatives. They also plan to open a project in the personal assistant area.

How has the market reacted to eelo?

Early testers seem happy with /e/, alternatively called eelo.

https://twitter.com/lowpunk/status/1050032760373633025
https://twitter.com/rvuong_geek/status/1048541382120525824

There are also some negative reactions from people who don't really welcome this new "mobile OS". A comment on Reddit by user JaredTheWolfy says: "This sounds like what Cyanogen tried to do, but at least Cyanogen was original and created a legacy for the community." Another comment, by user MyNDSETER, reads: "Don't trust Google with your data. Trust us instead. Oh gee ok and I'll take some stickers as well." Yet another Reddit user, zdakat, says: "I guess that's the android version of I made my own cryptocurrency! (by changing a few strings in Bitcoin source, or the new thing: by deploying a token on Ethereum)"

You can check out a detailed article about eelo on Hackernoon, and the /e/ website.

Read next:
A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Microsoft Your Phone: Mirror your Android phone apps on Windows
Android Studio 3.2 releases with Android App Bundle, Energy Profiler, and more!


Google's Smart Display - A push towards the new OS, Fuchsia

Amarabha Banerjee | 03 Aug 2018 | 5 min read
There is no denying that Amazon Echo, HomePod, and Alexa have brought about a revolution in the digital assistant space. A careful study will, however, point to a few common observations about these products: they are merely a representation of what the assistant is thinking, and they can only perform scheduled tasks like playing a song, answering predefined questions, or making a to-do list. This is where the latest innovation by Lenovo on the Google-powered Smart Assistant stands out. It's not just an assistant that can perform pre-planned tasks; it's a Smart Display.

How does Google's Lenovo Smart Display look?

(Image source: Techhive)

The Google Smart Display comes in two versions: an 8-inch model with a 1280 x 800 touchscreen for US$200, and a larger 10-inch version with a 1920 x 1200 display. Although Lenovo is the first company in the Smart Display domain, others like JBL and LG are also planning to come up with their own versions. The screen is not merely an add-on to speaker-based systems like Alexa; it is a display built around the Google smart assistant, which really increases its functionality. The rear design is sleek, and it's easy to use on a desktop.

What does it do?

The Smart Display runs on Qualcomm's 624 Home Hub platform, the faster of its two architectures for Android Things devices. While the Qualcomm 212 platform works well for things like smart speakers and low-power smart home accessories, the 624 Home Hub platform is better suited for the Smart Display. It helps process Google Assistant requests both audibly and visually.

How is the Smart Display different?

The main question here is how it is different from, or better than, existing solutions like Amazon Alexa or Echo. Simple performance tests have yielded results in favor of the Smart Display. A search on Alexa makes it search the internet and read out the first few answers; this amounts to a question-and-answer-based smart system. The Smart Display doesn't just bring up a relevant graphical display; it shows relevant links and the most important aspect of your search on the screen, and it is significantly faster, the reason being the powerful Google AI behind the system.

From Android to Fuchsia

The Smart Display UI and the already successful launch of Chrome OS have triggered discussions around the possible replacement of Android with the futuristic Fuchsia. The discussion centers on Google's intention of creating a mobile OS with a formalized update cycle: an OS that won't have different versions running across different devices. Chrome OS seems to be the first step; it runs on the same Linux kernel as Android. While recent developments related to Fuchsia are still under wraps, Google might eventually want to use Android as a runtime on Fuchsia. This can be a difficult task, but it is a better option than carrying the overhead of running two kernels, one for Android and the other for Fuchsia.

The signs that this Smart Display UI is a way to test the waters for Fuchsia are plenty. The Smart Display UI is completely based on the Google Smart Assistant, which will be the core of Fuchsia. It doesn't have a home button or an app menu button; you navigate and search using voice commands. Voice is also at the heart of Fuchsia: rather than swiping on screen, you will be able to navigate and search by voice. Android P, next in line for release, is also moving towards a similar UI. Android P will display your apps as cards, promote them to the system level, and avoid a complete launch. This will help reduce system overhead and stop apps from running perpetually.

From all these indicators, it seems a natural progression for this Smart Display UI system to become the face of the Fuchsia operating system. The challenge will be in migrating present Android phones to Fuchsia. Since Android is currently written in Java and bundled in a JVM bubble to cater to the Android system, developers believe it wouldn't be a difficult task to create such an intermediate layer for Fuchsia, which is written in Dart. The shift, however, seems a bit far-fetched for now; some discussions on Reddit suggest it might be as late as 2022. But the drive for this change is pretty clear:

- The need for a uniform OS across all devices
- The independence of OS performance from system resources
- Taking backend operations to the cloud
- Making the UI voice controlled, reducing the touchscreen to a mere visual tool

We can only hope that Fuchsia can solve the problems for users that Android couldn't. A uniform mobile computing platform can be a good start, and Fuchsia seems to be the perfect successor to carry forward the legacy of Android and its huge fan base.

Read next:
Android P Beta 4 is here, stable Android P expected in the coming weeks!
Is Google planning to replace Android with Project Fuchsia?
Boston Dynamics' 'Android of robots' vision starts with launching 1000 robot dogs in 2019


Facebook open-sources PyText, a PyTorch based NLP modeling framework

Amrata Joshi | 17 Dec 2018 | 4 min read
Last week, the team at Facebook AI Research announced that they are open-sourcing the PyText NLP framework. PyText, a deep-learning-based NLP modeling framework, is built on PyTorch. Facebook is open-sourcing some of the conversational AI tech that powers the Portal video chat display and M suggestions on Facebook Messenger.

https://twitter.com/fb_engineering/status/1073629026072256512

How is PyText useful for Facebook?

The PyText framework is used for tasks like document classification, semantic parsing, sequence tagging, and multitask modeling. It fits easily into research and production workflows and emphasizes robustness and low latency to meet Facebook's real-time NLP needs. PyText is also responsible for models powering more than a billion daily predictions at Facebook. The framework addresses the conflicting requirements of enabling rapid experimentation and serving models at scale by providing simple interfaces and abstractions for model components. It uses PyTorch's capability of exporting models for inference through the optimized Caffe2 execution engine.

Features of PyText

- Production-ready models for various NLP/NLU tasks such as text classifiers, sequence taggers, etc.
- Distributed-training support, built on the new C10d backend in PyTorch 1.0.
- Extensible components that help in creating new models and tasks.
- Modularity that allows creating new pipelines from scratch and modifying existing workflows.
- A simplified workflow for faster experimentation.
- Access to a rich set of prebuilt model architectures for text processing and vocabulary management.
- Serves as an end-to-end platform for developers; its modular structure helps engineers incorporate individual components into existing systems.
- Support for string tensors to work efficiently with text in both training and inference.

PyText for NLP development

PyText improves the workflow for NLP and supports distributed training for speeding up NLP experiments that require multiple runs.

Easily portable: PyText models can be easily shared across different organizations in the AI community.

Prebuilt models: With models focused on NLP tasks such as text classification, word tagging, semantic parsing, and language modeling, the framework makes it easy to use prebuilt models on new data.

Contextual models: To improve conversational understanding in various NLP tasks, PyText uses contextual information, such as an earlier part of a conversation thread. There are two contextual models in PyText: a SeqNN model for intent labeling tasks and a Contextual Intent Slot model for joint training on both tasks.

PyText exports models to Caffe2

PyText uses PyTorch 1.0's capability to export models for inference through the optimized Caffe2 execution engine. Native PyTorch models require a Python runtime, which is not scalable because of the multithreading limitations of Python's Global Interpreter Lock. Exporting to Caffe2 provides an efficient multithreaded C++ backend for serving huge volumes of traffic efficiently.

PyText's capabilities to test new state-of-the-art models will be improved further in the next release. Since putting sophisticated NLP models on mobile devices is a big challenge, the team at Facebook AI Research will work towards building an end-to-end workflow for on-device models.
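PyText wraps this export path internally, but the underlying PyTorch 1.0 capability it builds on can be sketched in a few lines. The model below is an illustrative stand-in, not a PyText model; ONNX is the interchange format that Caffe2 consumes:

```python
# Illustrative sketch of PyTorch-1.0-style model export for C++ serving.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """A stand-in classifier head over a pooled sentence embedding."""
    def __init__(self, embed_dim=32, num_classes=4):
        super().__init__()
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, pooled):
        return self.fc(pooled)

model = TinyClassifier().eval()
example = torch.randn(1, 32)  # dummy input; tracing fixes the graph shapes
torch.onnx.export(model, example, "classifier.onnx",
                  input_names=["pooled"], output_names=["logits"])
# classifier.onnx can now be served from a multithreaded C++ backend,
# sidestepping Python's GIL at inference time.
```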
The team plans to add support for multilingual modeling and other modeling capabilities. They also plan to make models easier to debug, and might add further optimizations for distributed training. "PyText has been a collaborative effort across Facebook AI, including researchers and engineers focused on NLP and conversational AI, and we look forward to working together to enhance its capabilities," said the Facebook AI Research team.

Users are excited about this news and want to explore more.

https://twitter.com/ezylryb_/status/1073893067705409538
https://twitter.com/deliprao/status/1073671060585799680

To know more in detail, check out the release notes on GitHub.

Read next:
Facebook contributes to MLPerf and open sources Mask R-CNN2Go, its CV framework for embedded and mobile devices
Facebook retires its open source contribution to Nuclide, Atom IDE, and other associated repos
Australia's ACCC publishes a preliminary report recommending Google Facebook be regulated and monitored for discriminatory and anti-competitive behavior


WebAssembly comes to Qt. Now you can deploy your next Qt app in browser

Kunal Chaudhari | 31 May 2018 | 2 min read
When Qt 5.11 was released last week, a technology preview of Qt for WebAssembly was released along with it, allowing developers to run Qt applications directly inside the browser window.

WebAssembly is a new technology that represents a paradigm shift in web development: leveraging it, developers can write high-performance applications that run directly in the browser. A common misconception about WebAssembly is that it will eventually replace JavaScript; in fact, it is intended to be used alongside JavaScript. WebAssembly is a bytecode format executed in a web browser, which allows an application to be deployed to any device with a compliant web browser without going through any explicit installation steps. WebAssembly is now supported by all major web browsers as a binary format for running executable code in web pages at nearly native machine-code speed.

Qt uses Emscripten, an open source LLVM-to-JavaScript compiler. Emscripten compiles Qt applications so they can run in a web browser, served from a web server. Deploying applications on multiple platforms is a tedious task; with WebAssembly, developers just need to compile once and deploy to a web server for any platform that has a browser. This feature comes in handy especially for enterprise users with multiple clients on different platforms: these enterprises can use Qt for WebAssembly to compile their Qt or Qt Quick app and deploy it once.

Adding support for WebAssembly is a huge step forward for Qt on its way to becoming a truly cross-platform framework. The project is currently in beta and has been released as a technology preview. You can visit the official Qt blog for more information.

Read next:
Qt 5.11 has arrived!
How to create multithreaded applications in Qt
3 ways to deploy a Qt and OpenCV application

Amazon SageMaker Continues to Lead the Way in Machine Learning and Announces up to 18% Lower Prices on GPU Instances from AWS News Blog

Matthew Emerick | 07 Oct 2020 | 11 min read
Since 2006, Amazon Web Services (AWS) has been helping millions of customers build and manage their IT workloads. From startups to large enterprises to the public sector, organizations of all sizes use our cloud computing services to reach unprecedented levels of security, resiliency, and scalability. Every day, they're able to experiment, innovate, and deploy to production in less time and at lower cost than ever before. Thus, business opportunities can be explored, seized, and turned into industrial-grade products and services.

As Machine Learning (ML) became a growing priority for our customers, they asked us to build an ML service infused with the same agility and robustness. The result was Amazon SageMaker, a fully managed service launched at AWS re:Invent 2017 that provides every developer and data scientist with the ability to build, train, and deploy ML models quickly.

Today, Amazon SageMaker is helping tens of thousands of customers in all industry segments build, train, and deploy high quality models in production: financial services (Euler Hermes, Intuit, Slice Labs, Nerdwallet, Root Insurance, Coinbase, NuData Security, Siemens Financial Services), healthcare (GE Healthcare, Cerner, Roche, Celgene, Zocdoc), news and media (Dow Jones, Thomson Reuters, ProQuest, SmartNews, Frame.io, Sportograf), sports (Formula 1, Bundesliga, Olympique de Marseille, NFL, Guinness Six Nations Rugby), retail (Zalando, Zappos, Fabulyst), automotive (Atlas Van Lines, Edmunds, Regit), dating (Tinder), hospitality (Hotels.com, iFood), industry and manufacturing (Veolia, Formosa Plastics), gaming (Voodoo), customer relationship management (Zendesk, Freshworks), energy (Kinect Energy Group, Advanced Microgrid Systems), real estate (Realtor.com), satellite imagery (Digital Globe), human resources (ADP), and many more.

When we asked our customers why they decided to standardize their ML workloads on Amazon SageMaker, the most common answer was: "SageMaker removes the undifferentiated heavy lifting from each step of the ML process." Zooming in, we identified five areas where SageMaker helps them most.

#1 – Build Secure and Reliable ML Models, Faster

As many ML models are used to serve real-time predictions to business applications and end users, making sure that they stay available and fast is of paramount importance. This is why Amazon SageMaker endpoints have built-in support for load balancing across multiple AWS Availability Zones, as well as built-in Auto Scaling to dynamically adjust the number of provisioned instances according to incoming traffic.

For even more robustness and scalability, Amazon SageMaker relies on production-grade open source model servers such as TensorFlow Serving, the Multi-Model Server, and TorchServe. A collaboration between AWS and Facebook, TorchServe is available as part of the PyTorch project, and makes it easy to deploy trained models at scale without having to write custom code.

In addition to resilient infrastructure and scalable model serving, you can also rely on Amazon SageMaker Model Monitor to catch prediction quality issues that could happen on your endpoints. By saving incoming requests as well as outgoing predictions, and by comparing them to a baseline built from a training set, you can quickly identify and fix problems like missing features or data drift.

Says Aude Giard, Chief Digital Officer at Veolia Water Technologies: "In 8 short weeks, we worked with AWS to develop a prototype that anticipates when to clean or change water filtering membranes in our desalination plants. Using Amazon SageMaker, we built a ML model that learns from previous patterns and predicts the future evolution of fouling indicators. By standardizing our ML workloads on AWS, we were able to reduce costs and prevent downtime while improving the quality of the water produced. These results couldn't have been realized without the technical experience, trust, and dedication of both teams to achieve one goal: an uninterrupted clean and safe water supply." You can learn more in this video.

#2 – Build ML Models Your Way

When it comes to building models, Amazon SageMaker gives you plenty of options. You can visit AWS Marketplace, pick an algorithm or a model shared by one of our partners, and deploy it on SageMaker in just a few clicks. Alternatively, you can train a model using one of the built-in algorithms, or your own code written for a popular open source ML framework (TensorFlow, PyTorch, and Apache MXNet), or your own custom code packaged in a Docker container.

You could also rely on Amazon SageMaker Autopilot, a game-changing AutoML capability. Whether you have little or no ML experience, or you're a seasoned practitioner who needs to explore hundreds of datasets, SageMaker Autopilot takes care of everything for you with a single API call. It automatically analyzes your dataset, figures out the type of problem you're trying to solve, builds several data processing and training pipelines, trains them, and optimizes them for maximum accuracy. In addition, the data processing and training source code is available in auto-generated notebooks that you can review and run yourself for further experimentation. SageMaker Autopilot also now creates machine learning models up to 40% faster with up to 200% higher accuracy, even with small and imbalanced datasets.

Another popular feature is Automatic Model Tuning. No more manual exploration, no more costly grid search jobs that run for days: using ML optimization, SageMaker quickly converges to high-performance models, saving you time and money, and letting you deploy the best model to production quicker.

"NerdWallet relies on data science and ML to connect customers with personalized financial products," says Ryan Kirkman, Senior Engineering Manager. "We chose to standardize our ML workloads on AWS because it allowed us to quickly modernize our data science engineering practices, removing roadblocks and speeding time-to-delivery. With Amazon SageMaker, our data scientists can spend more time on strategic pursuits and focus more energy where our competitive advantage is—our insights into the problems we're solving for our users." You can learn more in this case study.

Says Tejas Bhandarkar, Senior Director of Product, Freshworks Platform: "We chose to standardize our ML workloads on AWS because we could easily build, train, and deploy machine learning models optimized for our customers' use cases. Thanks to Amazon SageMaker, we have built more than 30,000 models for 11,000 customers while reducing training time for these models from 24 hours to under 33 minutes. With SageMaker Model Monitor, we can keep track of data drifts and retrain models to ensure accuracy. Powered by Amazon SageMaker, Freddy AI Skills is constantly evolving with smart actions, deep-data insights, and intent-driven conversations."

#3 – Reduce Costs

Building and managing your own ML infrastructure can be costly, and Amazon SageMaker is a great alternative. In fact, we found out that the total cost of ownership (TCO) of Amazon SageMaker over a 3-year horizon is over 54% lower compared to other options, and developers can be up to 10 times more productive. This comes from the fact that Amazon SageMaker manages all the training and prediction infrastructure that ML typically requires, allowing teams to focus exclusively on studying and solving the ML problem at hand.

Furthermore, Amazon SageMaker includes many features that help training jobs run as fast and as cost-effectively as possible: optimized versions of the most popular machine learning libraries, a wide range of CPU and GPU instances with up to 100 Gbps networking, and of course Managed Spot Training, which lets you save up to 90% on your training jobs. Last but not least, Amazon SageMaker Debugger automatically identifies complex issues developing in ML training jobs. Unproductive jobs are terminated early, and you can use model information captured during training to pinpoint the root cause.

Amazon SageMaker also helps you slash your prediction costs. Thanks to Multi-Model Endpoints, you can deploy several models on a single prediction endpoint, avoiding the extra work and cost associated with running many low-traffic endpoints. For models that require some hardware acceleration without the need for a full-fledged GPU, Amazon Elastic Inference lets you save up to 90% on your prediction costs. At the other end of the spectrum, large-scale prediction workloads can rely on AWS Inferentia, a custom chip designed by AWS, for up to 30% higher throughput and up to 45% lower cost per inference compared to GPU instances.

Lyft, one of the largest transportation networks in the United States and Canada, launched its Level 5 autonomous vehicle division in 2017 to develop a self-driving system to help millions of riders. Lyft Level 5 aggregates over 10 terabytes of data each day to train ML models for their fleet of autonomous vehicles. Managing ML workloads on their own was becoming time-consuming and expensive. Says Alex Bain, Lead for ML Systems at Lyft Level 5: "Using Amazon SageMaker distributed training, we reduced our model training time from days to a couple of hours. By running our ML workloads on AWS, we streamlined our development cycles and reduced costs, ultimately accelerating our mission to deliver self-driving capabilities to our customers."
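As a rough sketch of what the Managed Spot Training option mentioned above looks like from the SageMaker Python SDK (v2 parameter names; the image URI, role, and S3 paths are placeholders, not real values):

```python
# Minimal sketch of a spot-priced training job with the SageMaker Python SDK.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="<your-training-image>",        # placeholder
    role="<your-sagemaker-execution-role>",   # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    use_spot_instances=True,  # train on spare capacity: up to 90% cheaper
    max_run=3600,             # cap on actual training seconds
    max_wait=7200,            # cap on training time plus waiting for capacity
    sagemaker_session=session,
)

estimator.fit({"train": "s3://<your-bucket>/train/"})  # placeholder S3 path
```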
#4 – Build Secure and Compliant ML Systems

Security is always priority #1 at AWS. It's particularly important to customers operating in regulated industries such as financial services or healthcare, as they must implement their solutions with the highest level of security and compliance. For this purpose, Amazon SageMaker implements many security features, making it compliant with the following global standards: SOC 1/2/3, PCI, ISO, FedRAMP, DoD CC SRG, IRAP, MTCS, C5, K-ISMS, ENS High, OSPAR, and HITRUST CSF. It's also HIPAA BAA eligible.

Says Ashok Srivastava, Chief Data Officer, Intuit: "With Amazon SageMaker, we can accelerate our Artificial Intelligence initiatives at scale by building and deploying our algorithms on the platform. We will create novel large-scale machine learning and AI algorithms and deploy them on this platform to solve complex problems that can power prosperity for our customers."

#5 – Annotate Data and Keep Humans in the Loop

As ML practitioners know, turning data into a dataset requires a lot of time and effort. To help you reduce both, Amazon SageMaker Ground Truth is a fully managed data labeling service that makes it easy to annotate and build highly accurate training datasets at any scale (text, image, video, and 3D point cloud datasets).

Says Magnus Soderberg, Director, Pathology Research, AstraZeneca: "AstraZeneca has been experimenting with machine learning across all stages of research and development, and most recently in pathology to speed up the review of tissue samples. The machine learning models first learn from a large, representative data set. Labeling the data is another time-consuming step, especially in this case, where it can take many thousands of tissue sample images to train an accurate model. AstraZeneca uses Amazon SageMaker Ground Truth, a machine learning-powered, human-in-the-loop data labeling and annotation service to automate some of the most tedious portions of this work, resulting in reduction of time spent cataloging samples by at least 50%."

Amazon SageMaker is Evaluated

The hundreds of new features added to Amazon SageMaker since launch are testimony to our relentless innovation on behalf of customers. In fact, the service was highlighted in February 2020 as the overall leader in Gartner's Cloud AI Developer Services Magic Quadrant. Gartner subscribers can click here to learn more about why we have an overall score of 84/100 in their "Solution Scorecard for Amazon SageMaker, July 2020", the highest rating among our peer group. According to Gartner, we met 87% of required criteria, 73% of preferred, and 85% of optional.

Announcing a Price Reduction on GPU Instances

To thank our customers for their trust and to show our continued commitment to make Amazon SageMaker the best and most cost-effective ML service, I'm extremely happy to announce a significant price reduction on all ml.p2 and ml.p3 GPU instances. It will apply starting October 1st for all SageMaker components and across the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (London), Canada (Central), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and AWS GovCloud (US-Gov-West).

Instance name      Price reduction
ml.p2.xlarge       -11%
ml.p2.8xlarge      -14%
ml.p2.16xlarge     -18%
ml.p3.2xlarge      -11%
ml.p3.8xlarge      -14%
ml.p3.16xlarge     -18%
ml.p3dn.24xlarge   -18%

Getting Started with Amazon SageMaker

As you can see, there are a lot of exciting features in Amazon SageMaker, and I encourage you to try them out! Amazon SageMaker is available worldwide, so chances are you can easily get to work on your own datasets. The service is part of the AWS Free Tier, letting new users work with it for free for hundreds of hours during the first two months. If you'd like to kick the tires, this tutorial will get you started in minutes. You'll learn how to use SageMaker Studio to build, train, and deploy a classification model based on the XGBoost algorithm.

Last but not least, I just published a book named "Learn Amazon SageMaker", a 500-page detailed tour of all SageMaker features, illustrated by more than 60 original Jupyter notebooks. It should help you get up to speed in no time.
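Continuing the placeholder sketch from the Reduce Costs section above, serving the trained model is a couple of calls; the instance type and payload here are again illustrative, and the expected payload format depends on the algorithm you trained:

```python
# Deploy the estimator trained above to a managed HTTPS endpoint,
# send one prediction request, then tear the endpoint down.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

result = predictor.predict(b"0.5,1.2,3.4")  # payload format depends on the model
print(result)

predictor.delete_endpoint()  # endpoints bill while running; clean up when done
```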
As always, we’re looking forward to your feedback. Please share it with your usual AWS support contacts, or on the AWS Forum for SageMaker. - Julien


Microsoft Azure’s new governance DApp: An enterprise blockchain without mining

Prasad Ramesh | 09 Aug 2018 | 2 min read
Microsoft Azure has just released a Blockchain-as-a-Service product that uses Ethereum, with a set of templates to deploy and configure your choice of blockchain network. This can be done with minimal Azure and blockchain knowledge.

A conventional public blockchain is based on Proof-of-Work (PoW) and requires mining, as the parties do not trust each other. An enterprise blockchain does not require PoW; it is instead based on Proof-of-Authority (PoA), where approved identities, or validators, validate the transactions on the blockchain (a toy sketch of this difference appears at the end of this article). The PoA product features a decentralized application (DApp) called the Governance DApp.

Blockchains in this new model can be deployed in 5-45 minutes, depending on the size and complexity of the network. The PoA network comes with security features such as an identity leasing system, which ensures no two nodes carry the same identity. There are also other features aimed at good performance:

- WebAssembly smart contracts: Solidity is cited as one of the pain points when developing smart contracts on Ethereum. This feature allows developers to use familiar languages such as C, C++, and Rust.
- Azure Monitor: Used to track node and network statistics. Developers can view the underlying blockchain to track statistics, while network admins can detect and prevent network outages.
- Extensible governance: With this feature, customers can participate in a consortium without managing the network infrastructure. It can optionally be delegated to an operator of their choosing.
- Governance DApp: Provides decentralized governance in which network authority changes are administered via on-chain voting by select administrators. It also contains validator delegation for authorities to manage the validator nodes set up in each PoA deployment. Users can audit change history; each change is recorded, providing transparency and auditability.

(Image source: Microsoft Blog)

Along with these features, the Governance DApp will also ensure each consortium member has control over their own keys. This enables secure signing on a wallet of the user's choosing. The blog mentions: "In the case of a VM or regional outage, new nodes can quickly spin up and resume the previous nodes' identities."

To know more, visit the official Microsoft Blog.
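To make the PoW/PoA contrast concrete, here is a deliberately simplified toy sketch, not Azure's implementation: in PoA, a block is accepted when it carries a valid signature from one of a fixed set of approved validator identities, with no mining puzzle involved. HMAC keys stand in for the real public-key certificates:

```python
# Toy illustration of Proof-of-Authority block acceptance.
import hashlib
import hmac

APPROVED_VALIDATORS = {      # identity -> signing key (toy stand-in for
    "validator-1": b"key-1", # real public-key validator certificates)
    "validator-2": b"key-2",
}

def sign_block(block: bytes, validator: str) -> bytes:
    return hmac.new(APPROVED_VALIDATORS[validator], block, hashlib.sha256).digest()

def accept_block(block: bytes, validator: str, signature: bytes) -> bool:
    if validator not in APPROVED_VALIDATORS:  # unknown identity: reject
        return False
    expected = sign_block(block, validator)
    return hmac.compare_digest(expected, signature)

block = b'{"txs": [], "parent": "0xabc"}'
sig = sign_block(block, "validator-1")
print(accept_block(block, "validator-1", sig))  # True: approved identity signed it
print(accept_block(block, "validator-3", sig))  # False: not an approved validator
```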
Read next:
Automate tasks using Azure PowerShell and Azure CLI [Tutorial]
Microsoft announces general availability of Azure SQL Data Sync
Microsoft supercharges its Azure AI platform with new features


Exciting New Features in C# 8.0

Richa Tripathi | 12 Apr 2018 | 3 min read
It's been more than 20 years since Microsoft released the first version of the C# language. Over the years, C# has experienced a remarkable evolution, from being called a Java copycat to becoming one of the most loved and used programming languages. The current developments in the C# 7 ecosystem are exciting and are grabbing developers' attention, but what about the future? Can developers take a sneak peek into the future of C#? Well, of course they can! The Microsoft language design team has been developing language features "in the open" for quite some time now. They have proposed several new features for the upcoming C# 8.0 and have released several prototypes for developers to try out and give feedback on via the official GitHub repo. Let's take a look at the most likely new C# 8 features.

Nullable Reference Types

The name of this particular feature might confuse a lot of developers, who may wonder "Isn't a nullable reference a bad idea?" or "Shouldn't it be called non-nullable reference types?". Sir Tony Hoare, the British computer scientist who invented null references, famously called them his "billion-dollar mistake"; the biggest problem is, of course, the risk of the infamous null-reference exception. Since all reference types in C# can be null, you always run the risk of getting an exception when you try to access a member of an object. Functional languages try to deal with this problem by having a type that represents the concept of a potentially absent value. Instead of introducing non-nullable reference types, Microsoft has chosen to consider reference types non-nullable by default and provide mechanisms to deal with nullable types, since a reference is usually expected to be non-null and safe to dereference.

Asynchronous Streams

Asynchronous streams provide the ability to use async/await inside an iterator. In most cases, an iterator is implemented synchronously. There are some cases, however, where it might need to await a call on every iteration to fetch the next item; in such cases, asynchronous streams come in handy (a cross-language sketch of this pattern follows at the end of this article). To support this feature, a couple of things need to be taken care of: new types, the async equivalents of IEnumerable and IEnumerator, and the ability to specify an await on an iterator construct such as foreach.

Default Interface Implementations

The primary use case for default interface methods is to enable the developer to safely evolve an interface. You could add new methods to it and, as long as you provide a default implementation, existing clients of the interface wouldn't be forced to implement them. Another important value proposition of default implementations on interfaces relates to Android and iOS. Since both Java and Swift offer this feature, it's tricky to use C# to wrap Android/iOS APIs that make use of default interface implementations. C# 8.0 will make it possible to wrap those APIs more faithfully.

There are plenty of other features proposed for C# 8, such as target-typed new expressions, covariant return types, and extension everything. All these features are in different stages of development, and a lot can (and probably will) change from now until C# 8.0's release. Till then, you can closely follow the official GitHub repo for C#.
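For readers unfamiliar with the await-inside-an-iterator pattern, Python's async generators already offer it; the following sketch is a cross-language analogy, not C#, and fetch_page is an invented stand-in for any awaitable call. It shows the shape that C# 8.0's asynchronous streams bring to foreach:

```python
# Cross-language analogy for C# 8.0 asynchronous streams: an iterator
# that awaits on every iteration before yielding the next item.
import asyncio

async def fetch_page(n):
    await asyncio.sleep(0.1)  # stand-in for a network or disk call
    return [n * 3, n * 3 + 1, n * 3 + 2]

async def items():
    # An async generator: comparable to a C# iterator returning
    # the async equivalent of IEnumerable.
    for page in range(3):
        for item in await fetch_page(page):
            yield item

async def main():
    # 'async for' consumes it, like an awaited foreach in C#.
    async for item in items():
        print(item)

asyncio.run(main())
```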
Check out other latest news:
Red Programming Language for Android development!
What's new in Visual Studio 1.22
OpenCV 4.0 is on schedule for July release


Google’s Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud

Melisha Dsouza | 25 Oct 2018 | 3 min read
Earlier this week, Google announced its plans to launch a Cloud Robotics platform for developers in 2019. Since the early onset of "cloud robotics" in 2010, Google has explored various aspects of the field. Now, with the launch of the Cloud Robotics platform, Google will combine the power of AI, robotics, and the cloud to deploy cloud-connected collaborative robots. The platform will encourage efficient robotic automation in highly dynamic environments. Its core infrastructure will be open source, and users will pay only for the services they use.

Features of the Cloud Robotics platform:

#1 Critical infrastructure

The platform will introduce secure and robust connectivity between robots and the cloud. Kubernetes will be used for the management and distribution of digital assets, and Stackdriver will assist with logging, monitoring, alerting, and dashboarding. Developers will gain access to Google's data management and AI capabilities, ranging from Cloud Bigtable to Cloud AutoML. Standardized data types and open APIs will help developers build reusable automation components. Moreover, open APIs support interoperability, which means integrators can compose end-to-end solutions with collaborative robots from different vendors.

#2 Specialized tools

The tools provided with this platform will help developers build, test, and deploy software for robots with ease. Composing and deploying automation solutions in customers' environments through system integrators can be done easily, and operators can monitor robot fleets and ongoing missions as well. Users pay only for the services they use, and if a user decides to move to another cloud provider, they can take their data with them.

#3 Fostering powerful first-party services and third-party innovation

Google's initial Cloud Robotics services can be applied to various use cases like robot localization and object tracking. The services will process sensor data from multiple sources and use machine learning to obtain information and insights about the state of the physical world. The platform will encourage an ecosystem of hardware and applications that can be used and re-used for collaborative automation.

#4 Industrial automation made easy

Industrial automation requires extensive custom integration. Collaborative robots can help improve the flexibility of the overall process, save costs, and avoid vendor lock-in. That said, it is difficult to program robots to understand and react to the unpredictable changes of the physical human world. The Google Cloud platform aims to solve these issues by providing flexible automation services such as the Cartographer service, the Spatial Intelligence service, and the Object Intelligence service.

Watch this video to know more about these services:
https://www.youtube.com/watch?v=eo8MzGIYGzs&feature=youtu.be

Alternatively, head over to Google's blog to know more about this announcement.

Read next:
What's new in Google Cloud Functions serverless platform
Cloud Filestore: A new high performance storage option by Google Cloud Platform
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits

Sugandha Lahoti | 30 Aug 2018 | 3 min read
Google today announced that it is stepping back from managing the Kubernetes project's infrastructure and is granting the Cloud Native Computing Foundation (CNCF) $9M in GCP credits for a successful transition. These credits are split over a period of three years to cover infrastructure costs. Google is also handing over operational control of the Kubernetes project to the CNCF community, which will now take ownership of day-to-day operational tasks such as testing and builds, as well as maintaining and operating the image repository and download infrastructure.

Kubernetes was first created by Google in 2014. Since then, Google has been providing Kubernetes with the cloud resources that support the project's development, including CI/CD testing infrastructure, container downloads, and other services like DNS, all running on Google Cloud Platform. In passing the reins to the CNCF, Google's goal is to make sure "Kubernetes is ready to scale when your enterprise needs it to". The $9M grant will be dedicated to building the world-wide network and storage capacity required to serve container downloads. In addition, a large part of this grant will fund scalability testing, which runs 150,000 containers across 5,000 virtual machines.

"Since releasing Kubernetes in 2014, Google has remained heavily involved in the project and actively contributes to its vibrant community. We also believe that for an open source project to truly thrive, all aspects of a mature project should be maintained by the people developing it. In passing the baton of operational responsibilities to Kubernetes contributors with the stewardship of the CNCF, we look forward to seeing how the project continues to evolve and experience breakneck adoption," said Sarah Novotny, Head of Open Source Strategy for Google Cloud.

The CNCF includes a large number of companies, the likes of Alibaba Cloud, AWS, Microsoft Azure, IBM Cloud, Oracle, and SAP, all of which will profit from the work of the CNCF and the Kubernetes community. With this move, Google is perhaps also transferring the load of running the Kubernetes infrastructure to these members. As mentioned in their blog post, they look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project's operations.

To learn more, check out the CNCF announcement post and the Google Cloud Platform blog.

Read next:
Kubernetes 1.11 is here!
Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use.
Kubernetes Container 1.1 Integration is now generally available.


Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

Bhagyashree R | 02 Nov 2018 | 2 min read
Yesterday, the Google Chrome team introduced Carlo, a web rendering surface for Node applications. Carlo provides rich rendering capabilities, powered by the Google Chrome browser, to Node applications. Using Puppeteer, it communicates with the locally installed browser instance. Puppeteer is also a Google Chrome project, offering a high-level API to control Chrome or Chromium over the DevTools Protocol.

Why was Carlo introduced?

Carlo aims to show how the locally installed browser can be used with Node out of the box. The advantage of using Carlo over Electron is that the Node v8 and Chrome v8 engines are decoupled in Carlo. This provides a maintainable model that allows independent updates of the underlying components; in short, Carlo gives you more control over bundling.

What can you do with Carlo?

Carlo enables you to create hybrid applications that use the web stack for rendering and Node for capabilities. You can:

- Visualize the dynamic state of your Node applications using the web rendering stack.
- Expose additional system capabilities, accessible from Node, to your web applications.
- Package your application into a single executable using the command-line interface, pkg.

How does it work?

Carlo's operation involves three steps:

- First, Carlo checks whether Google Chrome is installed locally.
- It then launches Google Chrome and establishes a connection to it over the process pipe.
- Finally, it exposes a high-level API for rendering in Chrome.

For users who do not have Chrome installed, Carlo prints an error message. It supports Chrome Stable channel versions 70.* and Node v7.6.0 onwards. You can install and get started with it by executing the following command:

npm i carlo

Read the full description on Carlo's GitHub repository.

Read next:
Node v11.0.0 released
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
Node.js and JS Foundation announce intent to merge; developers have mixed feelings


Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js

Bhagyashree R | 24 Aug 2018 | 2 min read
Express Gateway 1.11.0 has been released, adding an important feature to the proxy policy along with some bug fixes. Express Gateway is a simple, agnostic, organic, and portable microservices API gateway built on Express.js.

What is new in this version?

Additions

- New stripPath parameter: Support for a new parameter called stripPath has been added to the proxy policy. Its default value is false. You can now completely own both the URL space of your backend server and the one exposed by Express Gateway.
- Official Helm chart: An official Helm chart has been added that enables you to install Express Gateway on your Rancher or Kubernetes cluster with a single command.

Bug fixes

- The base condition schema is now correctly returned by the /schemas Admin API endpoint, so that external clients can use it and resolve its references correctly.
- Previously, invalid configuration could be sent to the gateway through the Admin API when using Express Gateway in production. The gateway was correctly validating the gateway.config content, but it wasn't validating all the policies inside it. This fix ensures that when an Admin API call modifies the configuration, validation is triggered, so a broken configuration file is never persisted to disk.
- Fixed a missing field in the oauth2-introspect JSON Schema.
- For consistency, the keyauth schema is now correctly named key-auth.

Miscellaneous changes

- The unused migration framework has been removed.
- The X-Powered-By header is now disabled for security reasons.
- The way Express Gateway is started in the official Dockerfile has changed: Express Gateway is no longer wrapped in a bash command before being run. The former command allocates an additional /bin/sh process; the latter does not.

In this article we looked at some of the updates introduced in Express Gateway 1.11.0. To know more about this new update, head over to the GitHub repo.

Read next:
API Gateway and its need
Deploying Node.js apps on Google App Engine is now easy
How to build Dockers with microservices

Vue 2.6 is now out with a new unified syntax for slots, and more

Bhagyashree R | 05 Feb 2019 | 3 min read
On the eve of Vue's fifth anniversary yesterday, its creator, Evan You, announced the release of Vue 2.6. This version, which goes by the name "Macross", comes with a new syntax for slot usage, along with a few performance improvements, internal changes, and more. Following are some of the highlights of Vue 2.6.

A new syntax for slot usage

Vue provides the slots feature to enable flexible component composition. Vue 2.6 introduces a new unified syntax for named and scoped slot usage: the new directive v-slot combines slot and slot-scope, which are now deprecated, in a single directive syntax. This update is fully backward compatible. According to the RFC, slot-scope will be soft-deprecated, meaning it will be marked deprecated in the docs and developers will be encouraged to use the new syntax, but no deprecation messages will be shown for now. The team plans to eventually remove slot-scope in Vue 3.0.

Performance improvements in slots

The rendering of normal slots happens during the parent's render cycle; hence, when a dependency of a slot changes, both the parent and child components re-render. This version comes with an optimization to avoid this re-rendering and ensure that parent-scope dependency mutations only affect the parent: the child component is no longer forced to update if it uses only scoped slots. If you use the new v-slot syntax, slots are compiled into scoped slots, which essentially means all slots get the performance advantage that comes with scoped slots. Normal slots are now also exposed on this.$scopedSlots as functions, so developers using render functions instead of templates can always use this.$scopedSlots, without having to worry about the type of slots being passed in.

Async error handling in Vue

Vue's error handling mechanism, which includes the in-component errorCaptured hook and the global errorHandler hook, now also captures errors inside v-on handlers. Additionally, if any of your lifecycle hooks or event handlers perform asynchronous operations, you can now return a Promise from the function; any uncaught error from that Promise chain is also sent to your error handlers.

Data pre-fetching during server-side rendering

This version comes with a new serverPrefetch hook that enables any component, not just route-level components, to pre-fetch data during server-side rendering. This was introduced to allow more flexible usage and reduce the coupling between data fetching and the router.

Dynamic directive arguments

Earlier, directive arguments were static, so users had to resort to argument-less object binding in order to leverage dynamic keys. Dynamic JavaScript expressions are now supported in directive arguments.

To read more in detail, check out Evan You's official announcement on Medium.

Read next:
Learning Vue in 2019 with Anthony Gore's developer knowledge map
Evan You shares Vue 3.0 updates at VueConf Toronto 2018
Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?


Xamarin Forms 3, the popular cross-platform UI Toolkit, is here!

Sugandha Lahoti | 10 May 2018 | 3 min read
Cross-platform app development is all the rage now, and to add fuel to the fire, Xamarin has released its latest cross-platform toolkit upgrade: the stable release of Xamarin.Forms 3 is here! Version 3 brings new layout and styling updates to improve how developers build UI. These include the Visual State Manager, FlexLayout, Style Sheets, and right-to-left support, to name a few. XAML compilation has also received specific attention, with build times reduced by as much as 88% in some benchmarks. Let us look at each of these features in detail.

Visual State Manager

The Visual State Manager (VSM) is now available in Xamarin.Forms. The VSM provides a structured way to make visual changes to the user interface from code. It introduces the concept of visual states, which are collected in visual state groups. Developers can now define the various states for layouts and controls declaratively in XAML or C# and easily update their UI. The Xamarin.Forms Visual State Manager defines one visual state group named "CommonStates" with three visual states: Normal, Disabled, and Focused.

FlexLayout

FlexLayout is a new layout inspired by the web's Flexbox. FlexLayout promotes flat, performant, and flexible UIs. It is ideal for handling the distribution and spacing of content within layouts, and it provides control of the direction of layout, justification, and alignment, among other properties. FlexLayout defines six public bindable properties and five attached bindable properties that affect the size, orientation, and alignment of its child elements.

Style Sheets

Xamarin.Forms 3.0 introduces the ability to style an app using CSS. Style sheets come in companionship with FlexLayout. A style sheet consists of a list of rules, with each rule consisting of one or more selectors and a declaration block. Style sheets can be added as separate CSS files or inline with resources. In Xamarin.Forms, CSS style sheets are parsed and evaluated at runtime, rather than compile time, and are re-parsed on use.

Right-to-left localization

Xamarin.Forms 3.0 is now equipped with a FlowDirection property to make it easier to flip layouts to match language direction. This is especially beneficial for Arabic and Hebrew scripts, which flow from right to left. Apart from supporting right-to-left layouts, the FlowDirection property also offers developers the flexibility to customize layouts as they see fit.

Xamarin.Forms 3.0 is now available on NuGet. Read the full release notes for the list of bug fixes.

Read next:
Five reasons why Xamarin will change mobile development
Hybrid Mobile apps: What you need to know
Creating Hello World in Xamarin.Forms