
Tech News


Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!

Melisha Dsouza
22 Nov 2018
3 min read
On 19th November, Microsoft announced the first release candidate (RC) of Azure DevOps Server 2019. Azure DevOps Server 2019 includes the new, fast, and clean Azure DevOps user interface and delivers the codebase of Microsoft Azure DevOps, optimized for customers who prefer to self-host.

Features of Azure DevOps Server 2019

In addition to existing SQL Server support, Azure DevOps Server now also supports Azure SQL. Customers who self-host Azure DevOps in their own datacenter can back it with Azure SQL instead of an on-premises SQL Server, taking advantage of Azure SQL capabilities such as backup features and scaling options while reducing the administrative overhead of running the service. Alternatively, customers can use the globally available Microsoft-hosted service to get automatic updates and automatic scaling.

Azure DevOps Server 2019 includes a new release management interface. Customers can easily understand how their deployment is taking place: it gives them better visibility into which bits are deployed to which environments and why. Customers can also mix and match agents self-hosted on-premises and in any cloud on Windows, Mac, or Linux while easily deploying to IaaS or PaaS in Azure as well as on-premises infrastructure.

A new navigation and improved user experience in Azure DevOps

A newly introduced feature is the 'my work flyout'. It was developed after feedback that customers working in one part of the product who need information from another part don't want to lose the context of their current task. With this new feature, customers can access the flyout from anywhere in the product for a quick glance at crucial information like work items, pull requests, and all favorites. For teams that use pull requests (PRs) and branch policies, there may be occasions when members need to override and bypass those policies.
To help teams verify that those policy overrides are being used in the right situations, a new notification filter has been added that lets users and teams receive email alerts any time a policy is bypassed. The Tests tab now gives rich, in-context test information for Pipelines, including an in-progress test view, a full-page debugging experience, in-context test history, reporting of aborted test executions, and a run-level summary.

The UI has changed significantly, and the team suggests that for self-hosting customers the new navigation model may require updates to internal documentation and training. A direct upgrade to Azure DevOps Server is supported from Team Foundation Server 2012 and newer. Previous versions of Team Foundation Server will stay on the old user interface. Check the Azure DevOps Server requirements and compatibility page to understand the dependencies required for a self-hosted installation.

Head over to Microsoft's blog for more information on this news. You can download Azure DevOps Server 2019 RC1 and check out the release notes for all the features and information for this release.

A multi-factor authentication outage strikes Microsoft Office 365 and Azure users
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition

Natasha Mathur
22 Nov 2018
2 min read
The AWS team yesterday announced updates to the face detection, analysis, and recognition features in Amazon Rekognition, its deep learning-based service that makes it easy to add image and video analysis to your applications. These updates are available in Rekognition at no extra cost, and no machine learning experience is required. They give customers an enhanced ability to detect more faces from images (even difficult ones), perform more accurate face matches, and obtain improved age, gender, and emotion attributes for the faces in images.

Amazon Rekognition can now detect 40% more faces, and the face recognition feature produces 30% more correct best matches. The rate of false detections has also dropped by 50%. Additionally, face matches now have more consistent similarity scores across varying lighting, pose, and appearance, letting customers use higher confidence thresholds, avoid false matches, and reduce human review in identity verification applications.

Face detection algorithms usually have difficulty detecting faces in images with challenging aspects: pose variations (caused by head or camera movement), difficult lighting (low contrast and shadows, washed-out faces), and blur or occlusion (faces covered by a hat, hair, or hands). The pose-variation issue generally arises in faces captured from acute camera angles (shots taken from above or below a face), shots with a side-on view of a face, or when the subject is looking away; it is typically seen in social media photos, selfies, or fashion photoshoots. Lighting issues are common in stock photography and at event venues where there isn't enough contrast between facial features and the background in low light.
Occlusion is seen in photos with artistic effects (selfies or fashion photos, video motion blur), fashion photography, or photos taken from identity documents. With the latest update, AWS says, Rekognition has become much better at handling all of these challenging aspects of images captured in unconstrained environments. For more information, check out the official blog post.

"We can sell dangerous surveillance systems to police or we can stand up for what's right. We can't do both," says a protesting Amazon employee
AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers
Amazon Rekognition can now 'recognize' faces in a crowd in real time
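To illustrate how an application might consume the improved attributes, here is a hedged Python sketch that filters a DetectFaces-style response by detection confidence and picks each face's top emotion. The `response` dict is a hand-written, abridged stand-in for the shape documented for `boto3`'s `rekognition.detect_faces(..., Attributes=['ALL'])`; the values and the threshold are purely illustrative.

```python
# Abridged, hand-written example of a DetectFaces response (illustrative values).
response = {
    "FaceDetails": [
        {"Confidence": 99.8, "AgeRange": {"Low": 25, "High": 35},
         "Gender": {"Value": "Female", "Confidence": 97.1},
         "Emotions": [{"Type": "HAPPY", "Confidence": 91.0},
                      {"Type": "CALM", "Confidence": 6.2}]},
        {"Confidence": 62.4, "AgeRange": {"Low": 40, "High": 55},
         "Gender": {"Value": "Male", "Confidence": 70.3},
         "Emotions": [{"Type": "CALM", "Confidence": 55.0}]},
    ]
}

def confident_faces(resp, threshold=90.0):
    """Keep only faces detected above the given confidence threshold,
    reporting each face's age range, gender, and highest-scoring emotion."""
    results = []
    for face in resp["FaceDetails"]:
        if face["Confidence"] < threshold:
            continue  # the higher thresholds the update enables cut false matches
        top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
        results.append({
            "age_range": (face["AgeRange"]["Low"], face["AgeRange"]["High"]),
            "gender": face["Gender"]["Value"],
            "top_emotion": top_emotion["Type"],
        })
    return results

print(confident_faces(response))
# -> [{'age_range': (25, 35), 'gender': 'Female', 'top_emotion': 'HAPPY'}]
```

Raising or lowering `threshold` is exactly the trade-off the announcement describes: the more consistent similarity and confidence scores let applications run stricter thresholds without losing true matches.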

IntelliJ IDEA 2018.3 is out with support for Java 12, accessibility improvements, GitHub pull requests, and more

Savia Lobo
22 Nov 2018
7 min read
Yesterday, the JetBrains community announced IntelliJ IDEA 2018.3, this year's third major update. It is a massive release that delivers Java 12 support, multiline TODO comments, GitHub pull requests, Git submodules, accessibility improvements, and more.

Major updates in IntelliJ IDEA 2018.3

Java updates
Support for Java 12: The 2018.3 version adds initial support for the upcoming Java 12, so users can preview Raw String Literals (JEP 326) in the IDE.
Quickly spot duplicates: Users can now quickly spot duplicates in more complicated cases.
Java Stream API improvements: Redundant sorted calls made before a subsequent min call are now easily detected.
New data-flow-based inspection: The new 'Condition is covered by further condition' inspection detects situations where the first condition is unnecessary because it is covered by the second one, and provides a quick-fix to remove the redundant condition.
Detection of redundant usages of the @SuppressWarnings annotation: The IDE now identifies situations where a suppressed inspection no longer addresses any warnings in the associated method body, class, or statement.

Editor updates
Multiline TODO comments are highlighted: The IDE highlights the first and all subsequent lines of a TODO comment in the editor and displays them in the TODO tool window.
New indentation status bar: A new indentation status bar displays the size of the indent in the current file.
Improvements in EditorConfig support: You can create a scope to disable code formatting on specific files and folders via the 'Formatter Control' tab in Preferences / Settings | Editor | Code Style. Syntax highlighting and code completion are now available for EditorConfig files.

Version control updates
Initial support for GitHub Pull Requests: Users can now view PRs in their IDE.
Support for Git submodules: Update your project, commit changes, view diffs, and resolve conflicts.
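As an illustration of the EditorConfig support mentioned above, here is a minimal, hypothetical .editorconfig file of the kind the IDE now highlights and completes; the file name and property names follow the standard EditorConfig format, but the specific settings are illustrative and not taken from the announcement.

```ini
# Root EditorConfig file; tools stop searching parent directories here
root = true

# Defaults for every file in the project
[*]
charset = utf-8
indent_style = space
indent_size = 4
insert_final_newline = true
trim_trailing_whitespace = true

# Narrower scope: two-space indents for YAML files only
[*.{yml,yaml}]
indent_size = 2
```

Sections are glob patterns matched against file paths, so a narrower pattern like `[*.{yml,yaml}]` overrides the defaults above it for matching files.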
New GitHub Pull Requests tool window: With this tool window, users can preview all the pull requests in their GitHub repository.
Advanced navigation in the VCS Log: Users can use the Forward and Back navigation actions in the VCS Log after navigating from a commit hash to the commit in the VCS Log tab, or after using the Go to hash/branch/tag action. The left and right arrow keys navigate to the child or parent commit.
Preview differences in the File History tab: Diff Preview is now available in the File History tab of the Version Control tool window.

Kotlin updates
Kotlin 1.3 support: IntelliJ IDEA can help you migrate a Kotlin project to Kotlin 1.3 and perform all the required changes in obsolete code to make it compliant with the latest state of the libraries.
Enhancements in multiplatform project support: In Kotlin 1.3, the model of multiplatform projects has been completely reworked to improve expressiveness and flexibility and to make sharing common code easier. IntelliJ IDEA provides a set of project examples that cover the most common use cases.
New Kotlin inspections and quick-fixes: Since the release of IntelliJ IDEA 2018.2, the Kotlin plugin has gained over 30 new inspections, quick-fixes, and intentions that help you write code much more effectively.

Spring and Spring Boot updates
Spring Boot 2.1 support: IntelliJ IDEA 2018.3 fully supports Spring Boot 2.1. Configuration values annotated with @DataSize are validated using the default @DataSizeUnit, if specified.
Spring Initializr improvements: While creating a project using Spring Initializr, the IDE will suggest installing or enabling appropriate plugins to ensure that support for all selected technologies is present.
Better JPA and Spring Data support for Kotlin: The IDE can now automatically inject JPQL into query strings, providing completion for entity names and parameters.
Users can write Spring Data interfaces in Kotlin, and IntelliJ IDEA will understand the entities used; the IDE provides smart completion for method names and quick-fixes for parameters.

Maven updates
Users can now easily delegate all their build and run actions to Maven: go to Preferences (Settings) | Build, Execution, Deployment | Build Tools | Maven | Runner and select the new 'Delegate IDE build/run actions to maven' option.

JVM debugger updates
Attach to Java processes started without a debug agent: After attaching to a process, users can view the current thread's state and variable values and use the memory view. To attach the debugger to a local process, use the handy new Attach Debugger action, available in the Run tool window.
Async stack traces in remote JVMs: IntelliJ IDEA 2018.3 now supports async stack traces in remote JVMs. To use the agent remotely, copy /lib/rt/debugger-agent.jar to the remote machine and add -javaagent:debugger-agent.jar to the remote JVM options.
Action to remove all breakpoints: IntelliJ IDEA 2018.3 comes with handy new actions that remove all the breakpoints in a project, or all the breakpoints in a file.

JavaScript and TypeScript updates
Improved Angular support: This includes much more accurate code completion and Go to Definition for variables, pipes and async pipes, and template reference variables.
Auto-imports in JavaScript: IntelliJ IDEA can now automatically add imports for symbols from the project's dependencies in JavaScript files. This works as long as there's a TypeScript definition file inside the package, or the package contains sources written as ES modules.
Support for Node.js worker threads: Users can now debug Node.js workers in IntelliJ IDEA. Note that this requires Node.js 10.12 or above and the experimental-worker flag. The IDE provides code completion for the worker threads API.
Improved flexibility with ESLint and TSLint: Users can override severity levels from the linter's configuration file and see all problems from the linter as errors or as warnings.

Kubernetes updates
Support for Helm resource files: The IDE now resolves Helm resource template files and provides editing support, including code completion, rename refactoring, and inspections and quick-fixes.
Navigation in Helm resource files: The IDE lets users navigate from a value's usage to its declaration in the chart's values.yaml file.
Helm template result preview: The IDE can now preview the result of Helm template rendering in the diff viewer.
Helm dependency update: A new Helm Dependency Update action downloads the external tgz dependencies (or updates existing ones) and displays them in the project tree.

Database updates
Support for the Cassandra database: With this release, the team has added support for the NoSQL database Cassandra.
Improvements in SQL code completion: Code completion now works for non-aggregated fields in GROUP BY, listing all columns in SELECT, MERGE and INSERT INTO table variables, named parameters of stored procedures, numeric fields in SUM() and AVG(), the FILTER (WHERE) clause, and field types in SQLite.
Introduce table alias: You can now use the Introduce table alias action to create an alias directly on a table, and this alias will automatically replace usages of the table's name.
Single connection mode: In IntelliJ IDEA 2018.3 users can view any temporary objects in the database tree, and it's possible to use the same transaction in different consoles.

To know more about these and other updates in detail, visit the JetBrains blog.

IntelliJ IDEA 2018.3 Early Access Program is now open!
What's new in IntelliJ IDEA 2018.2
How to set up the Scala Plugin in IntelliJ IDE [Tutorial]

Email and names of Amazon customers exposed due to ‘technical error’; number of affected users unknown

Prasad Ramesh
22 Nov 2018
3 min read
Yesterday, some Amazon customers received an email stating that their names and email addresses had been exposed due to a 'technical error'. There have been several reports of this on the internet.

What is exposed?

Amazon said that users need not change their passwords: only the emails and names of affected customers have been exposed. As per the information shared by Amazon, passwords and payment information like credit cards appear to be unaffected; the worst that could happen is that your email gets a bunch of spam. The company did not reveal further information about the incident. The number of affected users/email addresses, and where this information ended up, is not known. Amazon told CNBC that the Amazon website and systems were not breached. In a statement, Amazon said: "We have fixed the issue and informed customers who may have been impacted."

The exact contents of the emails read: "Hello, We're contacting you to let you know that our website inadvertently disclosed your name and email address due to a technical error. The issue has been fixed. This is not a result of anything you have done, and there is no need for you to change your password or take any other action. Sincerely, Customer Service http://Amazon.com"

What are people saying?

Surprisingly, Amazon did not recommend changing the passwords of affected accounts. Also, the email signature had a capital A in the Amazon URL and used "http://" instead of "https://".
https://twitter.com/OfficialMisterC/status/1065227154961719296
https://twitter.com/briankrebs/status/1065219981833617408
Given these discrepancies in the email signature, Amazon customers also questioned whether the email really came from Amazon. Here are tweets displaying a chat with Amazon customer care. The responses from Amazon customer care are also vague, and they insist that the exposed information is not available publicly.
https://twitter.com/YaBoyKevinnn/status/1065325794740850688
https://twitter.com/notenoughnamez/status/1065231918713704449

Amazon sellers get customer information

A comment on Hacker News reads: "If you were one of my customers I looked at your house, judged your grass, found you on LinkedIn and Facebook, Instagram, mortgages, mugshots, everything lol. The sellers also get your full name and address even on fulfilled by Amazon." This comment might be an exaggeration from an over-enthusiastic seller. Other sellers do confirm that they can see names and addresses, but not emails; one seller said they get this information to confirm the shipping address. The Amazon terms of service also prohibit sellers from contacting customers directly for any purpose other than the order.

This is one area where the EU seems better off, with a GDPR article that requires companies to inform users of data breaches. But even that only asks companies to "describe the nature of the personal data breach including where possible, the categories and approximate number of data subjects, approximate number of personal data records concerned". So it doesn't look like Amazon intends to disclose any further information about this incident, and it assures customers there is no need to worry. This story appeared first on BetaNews after several Amazon customers reported it online.

Amazon splits HQ2 between New York and Washington, D.C. after making 200+ cities compete over a year; public sentiment largely negative
A new data breach on Facebook due to malicious browser extensions put almost 81,000 users' private data up for sale, reports BBC News
Cathay Pacific, a major Hong Kong-based airline, suffers a data breach affecting 9.4 million passengers

Apple has quietly acquired privacy-minded AI startup Silk Labs, reports The Information

Sugandha Lahoti
22 Nov 2018
2 min read
According to a report by The Information, Apple quietly acquired the AI startup Silk Labs earlier this year; the report of the acquisition has only come out recently. According to PitchBook, a research firm that tracks startup financing, the deal was likely a small one for Apple, as Silk Labs had only about a dozen employees and had raised approximately $4 million in funding.

Google, Amazon, and other companies have been using cloud-based servers to handle most AI processing for mobile devices. This raises user privacy issues, as these companies can monitor users' requests as they come in. Apple, on the other hand, has always been vocal about "selling smartphones and hardware and not user privacy". What Apple has planned for Silk Labs is unknown, though both companies have in the past expressed interest in building AI systems that operate locally instead of in the cloud. This may have been the reason for the acquisition.

Silk Labs is based in San Mateo, California. It was founded by former Mozilla CTO Andreas Gal and former Mozilla platform engineer Chris Jones, along with Michael Vines, who served as Qualcomm Innovation Center's senior director of technology. Silk Labs mostly works in "video and audio intelligence", as well as edge-computing use cases ranging from home security to retail analytics and building surveillance. Silk Labs's 2016 home monitoring camera, called Sense, was capable of detecting people, faces, objects, and audio signals. It could also play music based on the user's taste and pair with third-party gadgets like Sonos speakers and smart light bulbs. The distinguishing factor, unlike other AI-based smart home products, was that Sense processed computations on-device and stored data locally to ensure user privacy. However, the product never surfaced and was canceled. Apple may also release its own smart video cameras following the acquisition.
More likely, though, Apple will use Silk Labs' tech to upgrade its underlying software and research to build on-device AI for Apple's existing camera and mobile solutions.

Tim Cook talks about privacy, supports GDPR for USA at ICDPPC, ex-FB security chief calls him out
Tim Cook criticizes Google for their user privacy scandals but admits to taking billions from Google Search
Apple T2 security chip has Touch ID, Security Enclave, hardware to prevent microphone eavesdropping, amongst many other features!

Neo4j Enterprise Edition is now available under a commercial license

Amrata Joshi
21 Nov 2018
3 min read
Last week, the Neo4j community announced that Neo4j Enterprise Edition will be available under a commercial license, with source code available only for the Neo4j Community Edition. The Neo4j Community Edition will continue to be provided under an open source GPLv3 license. According to the Neo4j community, this change won't affect any Neo4j open source projects, nor will it impact customers, partners, or OEM users operating under a Neo4j subscription license. Neo4j Desktop users using Neo4j Enterprise Edition under the free development license won't be affected either, and neither will members of the Neo4j Startup program.

The reason for choosing an open-core licensing model

The idea behind moving Neo4j Enterprise Edition to a commercial license was to clarify and simplify the licensing model and remove ambiguity: the community wanted to make clear what they sell, what they open source, and what options they offer. The Enterprise Edition source and object code were initially available under multiple licenses, which led to multiple interpretations and ultimately created confusion in the open source community, among buyers, and even in legal reviewers' minds. According to the Neo4j blog, ">99% of Neo4j Enterprise Edition code was written by individuals on Neo4j's payroll – employed or contracted by Neo4j-the-company. As for the fractional <1%... that code is still available in older versions. We're not removing it. And we have reached out to the few who make up the fractional <1% to affirm their contributions are given proper due."

Developers can use the Enterprise Edition for free via Neo4j Desktop for desktop-based development. Startups can benefit from the startup license offered by Neo4j, now available to startups with up to 20 employees.
Data journalists, such as the ICIJ and NBC News, can use the Enterprise Edition for free via the Data Journalism Accelerator Program. Neo4j also offers a free license to universities for teaching and learning. To know more about this news, check out Neo4j's blog.

Neo4j rewarded with $80M Series E, plans to expand company
Why Neo4j is the most popular graph database
Neo4j 3.4 aims to make connected data even more accessible

Red Hat announces full support for Clang/LLVM, Go, and Rust

Prasad Ramesh
21 Nov 2018
2 min read
Yesterday, Bob Davis, Senior Product Manager at Red Hat, announced that Clang/LLVM, Go, and Rust will now enter the "Full Support Phase". The support lifecycle changes with the General Availability (GA) of Clang/LLVM 6.0, Go 1.10, and Rust 1.29. Previously, these languages and tools had "Technology Preview" status: they were provided for users to test their functionality and give feedback during the development process, were not fully supported under the Red Hat Subscription Level Agreements, were not guaranteed to be functionally complete, and were not intended for live production use. GA means that these products have now officially entered a phase in which they receive full support. The Red Hat website states: "During the Full Support Phase, qualified Critical and Important Security errata advisories (RHSAs) and Urgent and Selected High Priority Bug Fix errata advisories (RHBAs) may be released as they become available. Other errata advisories may be delivered as appropriate."

Where available, support for new hardware and some enhanced software functionality may also be provided at the sole discretion of Red Hat, generally in minor releases. The minor releases will focus only on resolving defects/bugs, and new installation images of the minor releases will be provided during the Full Support Phase. As these packages evolve fast, the support lifecycle will also run on short intervals: quarterly updates to Rust, and updates every six months to LLVM and Go. The support model differs from the usual long-term support (LTS) approach: for LLVM, Rust, and Go, only the most recent build will be maintained. If an older version has a bug, the most recent build will be updated to fix it; if a bug is present in the current build, it will be addressed in the next scheduled minor release. For more details, visit the Red Hat Blog.
The LLVM project is ditching SVN for GitHub. The migration to GitHub has begun.
Golang just celebrated its ninth anniversary
Rust 1.30 releases with procedural macros and improvements to the module system

Kotlin-based framework Ktor 1.0 released with features like sessions, metrics, call logging and more

Amrata Joshi
21 Nov 2018
3 min read
In October, the beta of Kotlin's framework Ktor 1.0 was released, delivering high performance and an idiomatic API. Yesterday, Ktor 1.0 was released as the first major version of the Kotlin-based framework. Ktor, a JetBrains project, builds asynchronous servers and clients in connected systems using coroutines and delivers good runtime performance. Ktor 1.0 includes essential features such as sessions, authentication, JSON serialization, popular template engines, WebSockets, metrics, and many others.

Ktor 1.0 feature highlights

Ktor 1.0 has two main parts: an HTTP server framework and a multiplatform HTTP client.

HTTP server framework
The HTTP server framework, built on Netty, Jetty, and Java servlets, runs on the JVM. The lightweight Netty and Jetty engines make processing fast, helping the server start receiving connections within a second. It is container-friendly and can easily be embedded into desktop or Android applications, and it can also run in an application server, for example, Tomcat.

Multiplatform HTTP client
The multiplatform HTTP client is asynchronous and is built using the same coroutines and I/O primitives that drive the server. It is implemented as a multiplatform library. The client is used for building asynchronous microservice architectures and for connecting backend functionality into asynchronous pipelines. It makes it easy to retrieve data without blocking application execution on mobile devices and web pages alike, and it supports the JVM, JS, Android, and iOS.

Features
Ktor's built-in support for serving static content is useful for serving style sheets, scripts, images, etc. Ktor provides a mechanism for constructing URLs and reading their parameters to create routes in a typed way. Ktor's metrics feature helps in configuring metrics to get useful information about the server and its requests. It also has a sessions mechanism for persisting data between different HTTP requests.
Sessions also help servers keep a piece of information associated with the client during a sequence of HTTP requests and responses. Ktor's compression feature compresses outgoing content using gzip, deflate, or a custom encoder, which helps reduce the size of the response. Ktor provides a call logging feature for logging client requests. Ktor 1.0 also introduced a WebSockets mechanism to keep a bi-directional, real-time, ordered connection between the server and the client.

Major improvements
Ktor 1.0 comes with improved performance and documentation
It uses Kotlin 1.3.10
Client response cancelation via receive<Unit>() and response.cancel() has been fixed
There are improvements to the test client and mock engine
The DevelopmentEngine has been renamed to EngineMain
There is an improved serialization client feature

Bug fixes
Processing of cookie dates, domains, and duplicate parameters has been fixed
The WebSocket session lifecycle has been fixed
Timeouts in WebSockets have been fixed with Jetty

To know more about this news, check out the announcement on the JetBrains blog.

Kotlin 1.3 released with stable coroutines, multiplatform projects and more
How to avoid NullPointerExceptions in Kotlin [Video]
Implementing Concurrency with Kotlin [Tutorial]

Symfony leaves PHP-FIG, the framework interoperability group

Amrata Joshi
21 Nov 2018
2 min read
Yesterday, Symfony, a community of 600,000 developers from more than 120 countries, announced that it will no longer be a member of PHP-FIG, the framework interoperability group. Prior to Symfony, other major members to leave this group include Laravel, Propel, Guzzle, and Doctrine. The main goal of the PHP-FIG group is to work together to maintain interoperability, discuss commonalities between projects, and make them better.

Why Symfony is leaving PHP-FIG

PHP-FIG has been working on various PSRs (PHP Standard Recommendations). Kévin Dunglas, a core team member at Symfony, said, "It looks like it's not the goal anymore, 'cause most (but not all) new PSRs are things no major frameworks ask for, and that they can't implement without breaking their whole ecosystem."
https://twitter.com/fabpot/status/1064946913596895232
The fact that major contributors have left the group could be one reason for Symfony to quit. But many seem disappointed by the move, as they aren't satisfied by the reason given.
https://twitter.com/mickael_andrieu/status/1065001101160792064
A matter of concern for Symfony was that major projects were not being implemented as a combined effort.
https://twitter.com/dunglas/status/1065004250005204998
https://twitter.com/dunglas/status/1065002600402247680
Something similar happened while working towards PSR-7, where commonalities between the projects were not given importance; instead, it was treated as a new separate framework.
https://twitter.com/dunglas/status/1065007290217058304
https://twitter.com/titouangalopin/status/1064968608646864897
People are still arguing over why Symfony quit.
https://twitter.com/gmponos/status/1064985428300914688

Will the PSRs die?

With this latest move by Symfony, questions have been raised about the project's next step. Will it still support PSRs, or is this the end of PSR support in Symfony?
Kévin Dunglas answered this question in one of his tweets: “Regarding PSRs, I think we'll implement them if relevant (such as PSR-11) but not the ones not in the spirit of a broad interop (as PSR-7/14).” To know more about this news, check out Fabien Potencier’s Twitter thread.

Perform CRUD operations on MongoDB with PHP
Introduction to Functional Programming in PHP
Building a Web Application with PHP and MariaDB – Introduction to caching


Twitter CEO, Jack Dorsey slammed by users after a photo of him holding 'smash Brahminical patriarchy' poster went viral

Natasha Mathur
21 Nov 2018
5 min read
Twitter CEO Jack Dorsey stirred up a social media hurricane after a picture of him holding a poster of a woman that said “Smash brahminical patriarchy” went viral. The picture, first shared on Twitter by Anna MM Vetticad, an award-winning Indian journalist and author, was later retweeted by Twitter India. https://twitter.com/annavetticad/status/1064084446909997056 Twitter India shared the picture mentioning that it was from a “closed-door discussion” with a group of women journalists and change makers from India. It also mentioned that “It is not a statement from Twitter or our CEO, but a tangible reflection of our company's efforts to see, hear and understand all sides of important public conversations that happen on our service around the world”. https://twitter.com/TwitterIndia/status/1064523207800119296 Soon after the picture was shared, it started to receive heavy backlash from Brahmin nationalists and other users over Dorsey appearing to slam Brahmins, members of the highest caste in Hinduism. Mohandas Pai (former Infosys CFO), Rajiv Malhotra (Indian-American author), and Chitra Subramaniam (Indian journalist and author) are some of the prominent names who have spoken out against the Twitter chief: https://twitter.com/RajivMessage/status/1064500259714408454 https://twitter.com/chitraSD/status/1064550413599473664 https://twitter.com/TVMohandasPai/status/1064554153626734592 https://twitter.com/neha_ABVP/status/1064936591263592448 In fact, Sandeep Mittal, Joint Secretary, Parliament of India, went as far as calling the picture a “fit case for registration of a criminal case for attempt to destabilize the nation”. https://twitter.com/smittal_ips/status/1064762920016494592 This was Dorsey's first tour of India, one of Twitter's fastest growing markets. During his tour, he had already conducted a discussion on Twitter with the students at IIT Delhi, and met the Dalai Lama, actor Shah Rukh Khan, and the Prime Minister of India, Narendra Modi.
It was last weekend that the picture was taken, during Dorsey's meetup with a group of journalists, writers, and activists in Delhi to hear about their experiences on Twitter in India. Vijaya Gadde, Twitter's legal, policy, and trust and safety lead, had accompanied Dorsey to India, and apologized over Twitter, saying that the poster was a “private gift” given to Twitter. No apology has been made by Dorsey so far. https://twitter.com/vijaya/status/1064586313863618560

People stand up in defense of the picture

The apology by Vijaya Gadde further sparked anger among the female journalists who were a part of the round-table discussion, as well as many other users. Anna MM Vetticad, who appears in the picture, tweeted against Gadde's apology, saying that she's “sad to see a lack of awareness and concern about the caste issues” and that the picture was not a “private photo”. Vetticad also mentioned that the photo was taken by a Twitter representative and sent out for distribution. https://twitter.com/annavetticad/status/1064782090905010176 Another journalist, Rituparna Chatterjee, who was also present during the discussion, tweeted in defense of the picture, saying that the posters were brought and given to the Twitter team by Sanghapali Aruna, who raised some important points regarding her Dalit experience on Twitter. She also mentioned that there was no “targeting/specific selection of any group”. https://twitter.com/MasalaBai/status/1064474340392148992 https://twitter.com/MasalaBai/status/1064474709952212998 Sanghapali Aruna, who brought the posters, talked to ThePrint about how women have been among the major victims of the Brahminical patriarchy, which controls all of us in more ways than one. “The ‘Smash Brahminical patriarchy’ poster which I gifted to Jack Dorsey was questioning precisely this hegemony and concentration of power in the hands of one community. 
This wasn’t an attempt at hate speech against the Brahmins but was an attempt to challenge the dominance and sense of superiority that finds its origins in the caste system”. Aruna was also greatly disturbed by Gadde’s apology as she mentions that  “Americans do not know of the Indian caste history, they can’t tell one brown person from another. But as an Asian woman, Vijaya should’ve known better”. Public reaction to the photo largely varies. Some people slammed Dorsey and the photo, while others have stood up in support of it. They believe that the poster was a political art piece that represented India’s Dalit lower caste and other religious minorities’ demands to get rid of the gender and caste-based discrimination by the elite Brahmins. https://twitter.com/dalitdiva/status/1064767431061708800 https://twitter.com/GauravPandhi/status/1064790294321905664 https://twitter.com/sandygrains/status/1064753374313144320 https://twitter.com/mihirssharma/status/1064756234702725120 https://twitter.com/AdityaRajKaul/status/1064975045443940352 https://twitter.com/GitaSKapoor/status/1065121168393478145 https://twitter.com/DrSharmisthaDe/status/1064942023940218880 Twitter’s CEO, Jack Dorsey’s Senate Testimony: On Twitter algorithms, platform health, role in elections and more Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee Twitter’s trying to shed its skin to combat fake news and data scandals, says Jack Dorsey

Outage plagues Facebook, Instagram and Whatsapp ahead of Black Friday Sale, throwing users and businesses into panic

Melisha Dsouza
21 Nov 2018
3 min read
Yesterday (20th November), Facebook and its subsidiary services, Instagram and WhatsApp, went down for users across the UK, Europe, and the US. This is the second time Facebook has faced an outage this month. According to Facebook's site for developers, the outage started around 6 a.m. Eastern time and lasted for 13 hours. 48% of Facebook users reported a total blackout, while 35% faced issues with login, and 16% had issues viewing pictures. In the case of Instagram, 53% of users had problems with their newsfeed, 33% had issues with login, and 13% with the website. Facebook responded to the outage on Twitter, assuring users that it was working to resolve the issue. https://twitter.com/facebook/status/1064905103755247621 Facebook's Ads Manager, the tool that lets users create advertisements on its social network, also crashed, just days before businesses use Facebook and Instagram to promote Black Friday sales. This left would-be advertisers unable to create new ad campaigns. One of Facebook's representatives confirmed the outage to Bloomberg, stating that people and companies couldn't create new ads or change their existing campaigns due to the issue, though advertisements previously launched through the system were still running on Facebook. Many advertisers who use Facebook to publish ads were left confused once the news was out. https://twitter.com/KayaWhatley/status/1064934308220149760 https://twitter.com/stevekatasi/status/1064977427405901825 Once the service was up and running, Facebook tweeted: https://twitter.com/facebook/status/1065036077486944256 In spite of the official announcement that ‘Facebook was 100 percent up for everyone’, many users complained that the site was not in a completely functional state. 
Among the persisting issues were pictures not loading, GIFs not loading all the way, broken links in posts, the Messenger app not functioning properly, pages not loading, accounts not being restored, and more. Facebook has not yet responded to these complaints. Speaking of Facebook's troubles, Zuckerberg is still on the defensive after a report in The New York Times put the company under scrutiny for its security breaches from Russian accounts ahead of the 2016 U.S. presidential election and for how it deals with the controversies it faces. In an exclusive interview with CNN yesterday, Zuckerberg confirmed that Sheryl Sandberg will continue working for the company and that he won't be stepping down as chairman.

GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage
Why skepticism is important in computer security: Watch James Mickens at USENIX 2018 argue for thinking over blindly shipping code
Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently


Autodesk acquires PlanGrid for $875 million, to digitize and automate construction workflows

Savia Lobo
21 Nov 2018
3 min read
Yesterday, Autodesk, a software corporation serving the architecture, engineering, construction, and manufacturing industries, announced that it has acquired PlanGrid, a leading provider of construction productivity software, for $875 million net of cash. The transaction is expected to close during Autodesk's fourth quarter of fiscal 2019, which ends on January 31, 2019. With this acquisition of the San Francisco-based startup, Autodesk will be able to offer a more comprehensive, cloud-based construction platform. PlanGrid's software, launched in 2011, gives builders real-time access to project plans, punch lists, project tasks, progress photos, daily field reports, submittals, and more. Autodesk's CEO, Andrew Anagnost, said, “There is a huge opportunity to streamline all aspects of construction through digitization and automation. The acquisition of PlanGrid will accelerate our efforts to improve construction workflows for every stakeholder in the construction process.” According to TechCrunch, “The company, which is a 2012 graduate of Y Combinator, raised just $69 million, so this appears to be a healthy exit for them.” In a 2015 interview at TechCrunch Disrupt in San Francisco, CEO and co-founder Tracy Young had said the industry was ripe for change: “The heart of construction is just a lot of construction blueprints information. It's all tracked on paper right now and they're constantly, constantly changing.” When Young started the company in 2011, her idea was to move all that paper to the cloud and display it on an iPad. According to Young, “At PlanGrid, we have a relentless focus on empowering construction workers to build as productively as possible. One of the first steps to improving construction productivity is the adoption of digital workflows with centralized data. PlanGrid has excelled at building beautiful, simple field collaboration software, while Autodesk has focused on connecting design to construction. 
Together, we can drive greater productivity and predictability on the job site.” Jim Lynch, Construction General Manager at Autodesk, said, “We'll integrate workflows between PlanGrid's software and both Autodesk Revit software and the Autodesk BIM 360 construction management platform, for a seamless exchange of information between all project members.” Autodesk and PlanGrid have developed complementary construction integration ecosystems through which customers can connect other software applications. The acquisition is expected to expand this integration partner ecosystem, giving customers a customizable platform to test and scale new ways of working. To know more about this news in detail, visit Autodesk's official press release.

IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever
Could Apple’s latest acquisition yesterday of an AR lens maker signal its big plans for its secret Apple car?
Plotly releases Dash DAQ: a UI component library for data acquisition in Python


Mozilla introduces LPCNet: A DSP and deep learning-powered speech synthesizer for lower-power devices

Bhagyashree R
21 Nov 2018
2 min read
Yesterday, Mozilla's Emerging Technologies group introduced a new project called LPCNet, a WaveRNN variant. LPCNet aims to improve the efficiency of speech synthesis by combining deep learning and digital signal processing (DSP) techniques. It can be used for text-to-speech (TTS), speech compression, time stretching, noise suppression, codec post-filtering, and packet loss concealment.

Why was LPCNet introduced?

Many recent neural speech synthesis algorithms have made it possible to synthesize high-quality speech and to code speech at very low bitrates. These algorithms, often based on models like WaveNet, give promising results in real time on a high-end GPU. LPCNet, however, aims to perform speech synthesis on end-user devices like mobile phones, which generally do not have powerful GPUs and have very limited battery capacity. Low-complexity parametric synthesis models such as low-bitrate vocoders do exist, but their quality is a concern: they are efficient at modeling the spectral envelope of the speech using linear prediction, but no such simple model exists for the excitation. LPCNet aims to show that the efficiency of speaker-independent speech synthesis can be improved by combining newer neural synthesis techniques with linear prediction.

What mechanisms does LPCNet use?

In addition to linear prediction, it includes the following tricks: Pre-emphasis/de-emphasis filters: These filters shape the noise caused by μ-law quantization; LPCNet can shape the μ-law quantization noise to be mostly inaudible. Sparse matrices: Like WaveRNN, LPCNet uses sparse matrices in the main RNN. These block-sparse matrices consist of 16x1 blocks to make the products easier to vectorize. As a minor improvement, all the weights on the diagonal of the matrices are kept, rather than forcing many non-zero blocks along the diagonal. 
Input embedding: Instead of feeding the inputs directly to the network, the developers have used an embedding matrix. Embedding is generally used in natural language processing, but using it for μ-law values makes it possible to learn non-linear functions of the input. You can read about LPCNet in more detail on Mozilla's official website.

Mozilla v. FCC: Mozilla challenges FCC’s elimination of net neutrality protection rules
Mozilla shares why Firefox 63 supports Web Components
Mozilla shares how AV1, the new open source royalty-free video codec, works
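The μ-law quantization referred to in the tricks above is standard audio companding (defined in ITU-T G.711 and also used by WaveNet-family models): it spends more resolution on quiet samples, which is why shaping the resulting noise matters. A minimal NumPy sketch of the idea, as an illustration only rather than LPCNet's actual code:

```python
import numpy as np

def mulaw_encode(x, mu=255):
    """Compand samples in [-1, 1] and quantize to integer levels 0..mu."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)  # compand to [-1, 1]
    return np.floor((y + 1) / 2 * mu + 0.5).astype(np.int64)  # round to a level

def mulaw_decode(q, mu=255):
    """Map integer levels back to float samples in [-1, 1]."""
    y = 2.0 * q / mu - 1.0
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

# Round-tripping a ramp shows the error profile: quantization error stays
# roughly proportional to the signal, so quiet samples keep fine detail.
x = np.linspace(-1.0, 1.0, 1001)
err = np.abs(mulaw_decode(mulaw_encode(x)) - x)
```

This relative (rather than absolute) error profile is what makes 8-bit μ-law audio sound far better than plain 8-bit linear quantization, and the residual noise it leaves on loud samples is the noise that pre-emphasis/de-emphasis filtering helps push out of audibility.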

Django is revamping its governance model, plans to dissolve Django Core team

Bhagyashree R
21 Nov 2018
4 min read
Yesterday, James Bennett, a software developer and an active contributor to the Django web framework, published a summary of a proposal to dissolve the Django Core team and revoke commit bits. Re-forming or reorganizing the Django core team has been a topic of discussion for the last couple of years, and this proposal aims to turn that discussion into action.

What are the reasons behind the proposal to dissolve the Django Core team?

Unable to bring in new contributors

Django, the open source project, has been facing difficulty in recruiting and retaining contributors to keep the project alive. Typically, open source projects avoid this situation through corporate sponsorship of contributions: companies that rely on the software have employees who are responsible for maintaining it. This was true in the case of Django as well, but it hasn't worked out as a long-term plan. Compared to the web framework's growth, it has hardly drawn contributors from across its entire user base, and it has not been able to bring in new committers at a sufficient rate to replace those who have become less active or completely inactive. This essentially means that Django depends on the goodwill of contributors who mostly don't get paid to work on it and are very few in number, which poses a risk to the future of the framework.

Django committer is seen as a high-prestige title

Currently, decisions are made by consensus, involving input from committers and non-committers on the django-developers list, and commits to the main Django repository are made by the Django Fellows. Even people who have commit bits of their own, and therefore have the right to push their changes straight into Django, typically use pull requests and start a discussion. 
The actual governance rarely relies on the committers, but still, Django committer is seen as a high-prestige title, and committers are given a lot of respect by the wider community. This creates an impression among potential contributors that they're not “good enough” to match up to those “awe-inspiring titanic beings”.

What is this proposal about?

Given the reasons above, the proposal is to dissolve the Django core team and revoke the commit bits. In their place, it introduces two roles: Mergers, who would merge pull requests into Django, and Releasers, who would package and publish releases. Rather than being all-powerful decision-makers, these would be bureaucratic roles. The current set of Fellows will act as the initial set of Mergers, and something similar will happen for Releasers. Instead of committers making decisions, governance would take place entirely in public, on the django-developers mailing list. As a final tie-breaker, the technical board would be retained and given some extra decision-making power, mostly related to selecting the Merger/Releaser roles and confirming that new versions of Django are ready for release. The technical board would be elected far less often than it currently is, and voting would be open to the public. The Django Software Foundation (DSF) would act as a neutral administrator of the technical board elections.

What are the goals this proposal aims to achieve?

Mr. 
Bennett believes that eliminating the distinction between committers and “ordinary contributors” will open the door to more contributors: “Removing the distinction between godlike “committers” and plebeian ordinary contributors will, I hope, help to make the project feel more open to contributions from anyone, especially by making the act of committing code to Django into a bureaucratic task, and making all voices equal on the django-developers mailing list.” The technical board remains as a backstop for resolving deadlocked decisions, and the proposal gives it additional authority, such as issuing the final go-ahead on releases. Retaining the technical board ensures that Django will not descend into some sort of “chaotic mob rule”. With this proposal, the formal description of Django's governance also becomes much more in line with how the project has actually worked for the past several years. To know more in detail, read the post by James Bennett: Django Core no more.

Django 2.1.2 fixes major security flaw that reveals password hash to “view only” admin users
Django 2.1 released with new model view permission and more
Getting started with Django and Django REST frameworks to build a RESTful app


BuzzFeed Report: Google’s sexual misconduct policy “does not apply retroactively to claims already compelled to arbitration”

Melisha Dsouza
21 Nov 2018
4 min read
On 8th November, Google updated its sexual misconduct policy in response to the #GoogleWalkout, which protested the discrimination, racism, and sexual harassment that employees encountered at Google's workplace. Following this, Sundar Pichai announced a ‘comprehensive’ plan to bring transparency into how employees raise concerns and how Google handles them. One of the most heavily publicized points was that Google eliminated forced arbitration in cases of sexual harassment. Now, fresh reports have emerged that Richard Hoyer, a lawyer for Loretta Lee, the ex-Google software engineer who filed a sexual harassment, gender discrimination, and wrongful termination suit against her former employer earlier this year, told BuzzFeed News that a Google attorney stated the policy change would not apply to her ongoing case. Google lawyer Brian Johnsrud sent an email (reviewed by BuzzFeed) to Hoyer last Friday which stated, “Google announced a prospective policy change which applies going forward to individual sex harassment as well as sex assault claims. This kind of policy change does not apply retroactively to claims already compelled to arbitration.” In response to this claim, Hoyer told BuzzFeed that “Google says, ‘We will not force employees into arbitration,’ and [the] response to me taking them up on that is, ‘Oh, we didn’t say when.’ It was a shock to see Google renege on the announcement that [the company] went through a lot of effort to publicize.” Loretta Lee's suit, filed in February 2018, alleges she was “routinely sexually harassed” without any intervention on Google's part to stop the harassment. The suit further states that male coworkers spiked her drinks with alcohol; that one male co-worker messaged her to ask for a “horizontal hug”; and that at a holiday party, a drunk male coworker slapped her. 
According to the lawsuit, she found a male coworker hiding under her desk and “believed he may have installed some type of camera or similar device under her desk”. After Lee complained to Google's HR department, the lawsuit states, her male co-workers retaliated against her: they refused to approve her code and stalled her projects, making it more difficult for her to succeed at work. In September 2018, the Lee case was compelled into arbitration. Hoyer told BuzzFeed News that there still would have been time to appeal the court's order compelling arbitration. On the day of Google's announcement to end forced arbitration, Hoyer contacted Google's lawyer to say Lee had elected not to arbitrate her claims. According to emails reviewed by BuzzFeed News, Google's lawyer delayed the reply for a week and finally stated that the company would still force arbitration for her case. The reply, apparently, was received on the very last day that Lee could have appealed. According to BuzzFeed, the email exchange between Hoyer and Johnsrud showed that in October, Lee sought to negotiate a monetary settlement with Google. Johnsrud's reply was dismissive: “I was surprised to receive your voicemail making a settlement demand after Ms. Lee has tried to trash Google in the press and avoid arbitration.” He further added that unless Lee substantially reduced her settlement demand, Google would prefer to proceed with arbitration. Private arbitration often shields a firm from workers airing their grievances in open court, and typically results in lower-cost settlements between the worker and the employer. In mandated arbitration, there are no rules of evidence and there is no public access to the proceedings, unlike in a traditional court setting. Many cases of mandated arbitration have not led to a fair settlement between the employer and employee. 
Google's rollback of its policy statement in Lee's case looks like a major discouragement for workers suffering from sexual harassment at other workplaces. Head over to BuzzFeed for the full coverage of this news.

90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Google now requires you to enable JavaScript to sign-in as part of its enhanced security features