News Archives - Enterprise Viewpoint
https://enterpriseviewpoint.com/category/news/

A Shot for Autonomous Vehicles to Become an Integral Piece of the Public Mobility Network
https://enterpriseviewpoint.com/a-shot-for-autonomous-vehicles-to-become-an-integral-piece-of-the-public-mobility-network/
Fri, 24 Nov 2023 16:24:54 +0000

The post A Shot for Autonomous Vehicles to Become an Integral Piece of the Public Mobility Network appeared first on Enterprise Viewpoint.

Human beings have proven themselves good at countless different things, but nothing beats their ability to improve on a consistent basis. This unwavering commitment to growth has enabled the world to clock some huge milestones, with technology emerging as a major member of the group. The high regard in which we hold technology is, by and large, predicated upon its skill-set, which ushered us toward a reality nobody could have imagined otherwise. Look beyond the surface for a moment, though, and it becomes abundantly clear that technology's rise owed just as much to the way we applied those skills across real-world environments. That application gave the creation a spectrum-wide presence and, as a result, set off a full-blown tech revolution. The revolution duly scaled up the human experience through some genuinely unique avenues, and yet technology continues to bring forth the right goods. This has grown more and more evident in recent times, and assuming one new discovery ends up with the desired impact, it will only put that trend on a higher pedestal moving forward.

Beep Inc., a provider of autonomous shared mobility solutions, has officially announced the launch of Beep AutonomOS, a platform designed to let public transit operators and mobility-as-a-service companies integrate autonomous mobility services rapidly and seamlessly into their solutions. Software-driven mobility gets dubbed the next big thing in the logistics industry because of the way it leverages autonomous mobility networks to combine real-time service optimization with greater efficiency and performance. But what makes AutonomOS an ideal candidate to serve this growing market? The answer resides in its ability to provide safe, scalable, cost-effective multi-passenger autonomous mobility services. Making the solution more important still is the fact that it provides a comprehensive services capability for the deployment and management of autonomous passenger services, either as a standalone solution or integrated into multimodal operations. On a more granular level, the solution comes decked out with a unified view of service performance, fleet health, and on-road operations. Then there are dedicated government tools to ensure mission compliance and passenger safety. With the safety and legality concerns duly addressed, AutonomOS brings to the fore service-optimization features that integrate service performance, smart-city infrastructure, and ridership data to drive greater service efficiency, optimize the passenger experience, and maximize ridership across the system. The product also offers service definition and planning functions to support a variety of service modes, from fixed-route to demand-responsive variants, while boasting a machine learning-powered in-cabin monitoring functionality that facilitates an immediate response in the event of a passenger safety or roadway issue.
Another thing that enhances the prospects of this solution is its support for leading ADS (automated driving system) providers, support further backed by a data protocol and a toolkit that enable rapid integration with other platforms. And in case the offering still doesn't sound attractive enough, it must be mentioned that AutonomOS is also built to be fully compatible with wider data standards, including GTFS (General Transit Feed Specification) and GTFS-RT (General Transit Feed Specification Realtime).
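Since AutonomOS advertises GTFS compatibility, it is worth noting how lightweight that standard is: a static GTFS feed is just a ZIP archive of CSV tables. Below is a minimal, hypothetical sketch (not Beep's code; the stop names and coordinates are invented) of parsing the stops.txt table that any GTFS-compatible platform would ingest:

```python
import csv
import io

def parse_gtfs_stops(stops_txt: str):
    """Parse the stops.txt table of a static GTFS feed.

    stops.txt lists every boarding location with an id, a name,
    and WGS84 coordinates, per the GTFS specification.
    """
    reader = csv.DictReader(io.StringIO(stops_txt))
    return {
        row["stop_id"]: {
            "name": row["stop_name"],
            "lat": float(row["stop_lat"]),
            "lon": float(row["stop_lon"]),
        }
        for row in reader
    }

# A tiny in-memory sample standing in for a real feed download.
sample = """stop_id,stop_name,stop_lat,stop_lon
A1,Town Center,28.5384,-81.3789
A2,Hospital,28.5412,-81.3755
"""

stops = parse_gtfs_stops(sample)
print(stops["A1"]["name"])  # Town Center
```

GTFS-RT, by contrast, is a Protocol Buffers format for live vehicle positions and trip updates, so real-time integration would additionally rely on the published protobuf bindings rather than plain CSV parsing.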

“Autonomous vehicles are capable of safely navigating our streets from waypoint to waypoint, but lack the concepts of mission, service and passenger,” said Joe Moye, CEO of Beep. “Beep AutonomOS fills a void in the autonomy landscape by introducing management and orchestration logic enabling the integration of autonomous vehicles into public mobility networks. More importantly, AutonomOS adds an additional layer of functionality to address passenger safety and comfort concerns in advance of fully unattended autonomous deployments.”

Founded in 2019, Beep’s rise to prominence stems from an ability to plan, deploy, and manage autonomous shuttles in dynamic mobility networks. By doing so, the company connects people and places with solutions that reduce congestion, eliminate carbon emissions, improve road safety and enable mobility for all.

Blazing Past a Major AI Bottleneck
https://enterpriseviewpoint.com/blazing-past-a-major-ai-bottleneck/
Wed, 22 Nov 2023 16:15:30 +0000

The post Blazing Past a Major AI Bottleneck appeared first on Enterprise Viewpoint.

There is more to human life than anyone can imagine, and yet the thing that stands out most is our ability to grow at a consistent clip. That ability has already fetched the world some huge milestones, with technology emerging as a major member of the group. The high regard in which we hold technology is, by and large, predicated upon its skill-set, which guided us toward a reality nobody could have imagined otherwise. Look beyond the surface for a moment, though, and it becomes clear that technology's rise owed just as much to the way we applied those skills across real-world environments. That application gave the creation a spectrum-wide presence and, as a result, set off a full-blown tech revolution. The revolution duly scaled up the human experience through some genuinely unique avenues, and yet technology continues to bring forth the right goods. This has grown more and more evident in recent times, and assuming one new discovery ends up with the desired impact, it will only put that trend on a higher pedestal moving forward.

The research teams at the Massachusetts Institute of Technology and the MIT-IBM Watson AI Lab have developed a technique that can empower deep-learning models to adapt to new sensor data, and, more importantly, do so directly on an edge device. Before unpacking the development, we should establish the problem statement. Deep-learning models that enable artificial intelligence chatbots need constant fine-tuning with fresh data to deliver the customization expected of them. Given that smartphones and other edge devices lack the memory and computational power such fine-tuning requires, the current framework navigates around that by uploading user data to cloud servers, where the model is updated. So what's the problem? For one, the data transmission process consumes huge amounts of energy. Beyond energy, there are also security risks involved, as sending sensitive user data to a cloud server always carries a risk of compromise. Having covered the problem, we can now get into how the new technique takes it on. Named PockEngine, the solution comes equipped with the means to determine which parts of a huge machine-learning model need alteration to improve accuracy. Complementing this is the fact that it only stores and computes with those specific pieces, leaving the rest undisturbed. To see why this matters, recall that simply running an AI model only involves inference, a process in which input data is passed from layer to layer until a prediction is generated; the real cost appears during training and fine-tuning, when the model undergoes a phase known as backpropagation. Backpropagation, in case you weren't aware, involves comparing the output to the correct answer.
The model is then run in reverse, and each layer is updated so the model's output gets closer to the correct answer. Since every layer has to be updated, the entire model and all intermediate results must be stored, making the fine-tuning mechanism pretty high-maintenance. Fortunately, there is a loophole: not all layers in a neural network are important for improving accuracy, and even for the layers that are important, the entire layer may not need to be updated. Hence, the surplus components don't need to be stored. Furthermore, you don't have to backpropagate all the way to the very first layer; the process can be stopped somewhere in the middle. Exploiting these loopholes, PockEngine first fine-tunes each layer, one at a time, on a given task, measuring the accuracy improvement after each individual layer. This methodology goes a long way toward identifying the contribution of each layer, along with the trade-offs between accuracy and fine-tuning cost, while automatically determining the percentage of each layer that needs to be fine-tuned.
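The paper's exact search procedure isn't reproduced here, but the layer-selection logic described above can be illustrated with a toy sketch: measure each layer's accuracy gain and memory cost in isolation, then greedily keep the best gain-per-cost layers until the device budget runs out. The layer names, numbers, and greedy heuristic below are all illustrative assumptions, not PockEngine's actual algorithm:

```python
def select_layers(layer_stats, cost_budget):
    """Greedily pick which layers to fine-tune on-device.

    layer_stats: list of (layer_name, accuracy_gain, memory_cost)
    tuples, measured by fine-tuning each layer in isolation.
    Layers with the best gain-per-cost ratio are chosen until the
    memory budget is exhausted; everything else stays frozen.
    """
    ranked = sorted(layer_stats, key=lambda s: s[1] / s[2], reverse=True)
    chosen, spent = [], 0.0
    for name, gain, cost in ranked:
        if spent + cost <= cost_budget:
            chosen.append(name)
            spent += cost
    return chosen

# Hypothetical measurements (gain in accuracy points, cost in MB).
stats = [("conv1", 0.1, 8.0), ("conv2", 0.4, 4.0),
         ("conv3", 1.2, 6.0), ("head", 2.0, 2.0)]
print(select_layers(stats, cost_budget=12.0))  # ['head', 'conv3', 'conv2']
```

With a 12 MB budget the early convolution layer, which contributes almost nothing per megabyte, stays frozen, which is exactly the kind of trade-off the technique automates.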

“On-device fine-tuning can enable better privacy, lower costs, customization ability, and also lifelong learning, but it is not easy. Everything has to happen with a limited number of resources. We want to be able to run not only inference but also training on an edge device. With PockEngine, now we can,” said Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, a distinguished scientist at NVIDIA, and senior author of an open-access paper describing PockEngine.

Another way the solution sets itself apart concerns timing. Put simply, the traditional backpropagation graph is generated during runtime, which demands a massive load of computation. PockEngine, on the other hand, builds it during compile time, as the model is being prepared for deployment. It deletes bits of code to remove unnecessary layers or pieces of layers, creating a pared-down graph of the model to be used during runtime, and then performs further optimizations on this graph to improve efficiency. Making this feature all the more important is the fact that the entire process only needs to be conducted once.
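As a rough illustration of that compile-time idea (a deliberately simplified sketch, not PockEngine's actual graph compiler; the layer names are invented): because backpropagation walks the layers from output back toward input, any layer shallower than the shallowest trainable one can be dropped from the backward graph before the model ever ships:

```python
def compile_backward_graph(layers, trainable):
    """Prune the backward graph ahead of deployment.

    layers is ordered input -> output; backpropagation visits it in
    reverse, so once the shallowest trainable layer has been updated
    the pass can stop. Everything before that layer needs no gradients
    and is removed from the graph at compile time, not at runtime.
    """
    shallowest = min(i for i, name in enumerate(layers) if name in trainable)
    # Each kept entry records whether the layer is updated or merely
    # passes gradients through toward a shallower trainable layer.
    return [(name, name in trainable) for name in layers[shallowest:]]

layers = ["embed", "block1", "block2", "block3", "head"]
# Fine-tune only the last two layers: the first three vanish entirely.
print(compile_backward_graph(layers, {"block3", "head"}))
# [('block3', True), ('head', True)]
```

Because this pruning happens once, before deployment, the runtime never pays for constructing or storing the parts of the graph it will never use.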

The researchers have already run initial tests on their latest brainchild, applying PockEngine to deep-learning models on different edge devices, including Apple M1 chips and the digital signal processors common in many smartphones and Raspberry Pi computers. Going by the available details, the solution performed on-device training up to 15 times faster, and it did so without any drop in accuracy. It also cut back substantially on the amount of memory required for fine-tuning. That done, the team moved on to applying the solution to the large language model Llama-V2, where the observations revealed that PockEngine was able to reduce each fine-tuning iteration from seven seconds to less than one second.

“This work addresses growing efficiency challenges posed by the adoption of large AI models such as LLMs across diverse applications in many different industries. It not only holds promise for edge applications that incorporate larger models, but also for lowering the cost of maintaining and updating large AI models in the cloud,” said Ehry MacRostie, a senior manager in Amazon’s Artificial General Intelligence division.

Taking Inspiration from the Nature to Attack the Global Warming Trend
https://enterpriseviewpoint.com/taking-inspiration-from-the-nature-to-attack-the-global-warming-trend/
Mon, 20 Nov 2023 14:22:37 +0000

The post Taking Inspiration from the Nature to Attack the Global Warming Trend appeared first on Enterprise Viewpoint.

Although human society is rooted in a variety of things, its most important foundation is our commitment to getting better under all circumstances. This reality, in particular, has enabled the world to clock some huge milestones, with technology emerging as a major member of the group. The high regard in which we hold technology is, by and large, predicated upon its skill-set, which guided us toward a reality nobody could have imagined otherwise. Look beyond the surface for a moment, though, and it becomes abundantly clear that technology's rise owed just as much to the way we applied those skills across real-world environments. That application gave the creation a spectrum-wide presence and, as a result, set off a full-blown tech revolution. The revolution duly scaled up the human experience through some genuinely unique avenues, and yet technology continues to bring forth the right goods. This has grown more and more evident in recent times, and assuming one new discovery ends up with the desired impact, it will only put that trend on a higher pedestal moving forward.

The research team at the University of Maryland has developed a cooling glass technology designed to turn down the heat indoors without electricity. According to certain reports, the technology can lower temperatures by 3.5°C at noon while reducing a mid-rise apartment building's yearly carbon emissions by 10%. But how does the whole thing actually work? The answer is neatly tucked into the glass's coating, which works in two ways. For starters, it reflects up to 99% of solar radiation to stop buildings from absorbing heat. The second aspect leverages "radiative cooling": the glass emits heat in the form of longwave infrared radiation through the so-called atmospheric transparency window, a part of the electromagnetic spectrum that passes through the atmosphere without boosting its temperature, effectively dumping large amounts of heat into the icy universe, where the temperature is generally around -270°C, or just a few degrees above absolute zero. In effect, the sky acts as a heat sink for the building, which, for some context, is pretty much how the Earth organically cools itself.
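For a sense of scale, the passive cooling budget can be ballparked with the Stefan-Boltzmann law. This is a back-of-envelope sketch, not the paper's model: the 30°C surface, the -10°C effective sky temperature, and the 0.95 emissivity are illustrative assumptions (the sky's effective emission temperature sits far above the -270°C of deep space, because the atmosphere radiates back at wavelengths outside the transparency window):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_cooling(t_surface_c, t_sky_c, emissivity=0.95):
    """Net radiative flux (W/m^2) from a surface to the sky."""
    ts = t_surface_c + 273.15  # surface temperature, K
    tk = t_sky_c + 273.15      # effective sky temperature, K
    return emissivity * SIGMA * (ts ** 4 - tk ** 4)

# A 30 C roof under a clear sky with a -10 C effective sky temperature
# sheds roughly 200 W per square metre of coated surface.
print(round(net_radiative_cooling(30.0, -10.0), 1))
```

Real panels shed less than this ideal figure once conduction and convection from the surrounding air are accounted for, but the estimate shows why a coating that both reflects sunlight and emits strongly in the transparency window can cool a building passively.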

“It’s a game-changing technology that simplifies how we keep buildings cool and energy-efficient,” said Xinpeng Zhao, assistant research scientist and the first author of this study. “This could change the way we live and help us take better care of our home and our planet.”

Make no mistake; the University of Maryland's latest brainchild isn't the first cooling glass in history. In contrast to previous attempts at developing the technology, however, the latest iteration is understood to be much more environmentally stable, standing firm against deterrents like water, ultraviolet radiation, dirt, and even flames. On top of that, the cooling glass can be applied to a wide assortment of surfaces, such as tile, brick, and metal, making the value proposition far more scalable and adoptable.

As for what orchestrated the breakthrough at a granular level, the researchers realized the feat by integrating finely ground glass particles and applying them as a binder. It might not seem like that big of a detail, but this simple-looking decision eliminated any role for the more pervasive but hardly durable polymers. Furthermore, the team tuned the particle size to maximize emission of infrared heat while reflecting sunlight at the same time.

“The development of the cooling glass aligns with global efforts to cut energy consumption and fight climate change,” said Liangbing Hu, a professor at the University of Maryland. He notably pointed to recent reports that this year’s Fourth of July fell on what may have been the hottest day globally in 125,000 years.

“This ‘cooling glass’ is more than a new material—it’s a key part of the solution to climate change,” said Hu. “By cutting down on air conditioning use, we’re taking big steps toward using less energy and reducing our carbon footprint. It shows how new technology can help us build a cooler, greener world.”

For the immediate future, the research team plans to conduct further tests to better understand the technology, and then to introduce more practical applications of the brand-new cooling glass. The researchers also have one eye on commercializing the technology soon, an intention visible in their decision to launch a startup company, CeraCool, which will be responsible for scaling the concept.

A Discovery with Potential to Re-energize the Entire Battery Landscape
https://enterpriseviewpoint.com/a-discovery-with-potential-to-re-energize-the-entire-battery-landscape/
Thu, 16 Nov 2023 11:28:42 +0000

The post A Discovery with Potential to Re-energize the Entire Battery Landscape appeared first on Enterprise Viewpoint.

Human know-how is well-known for being expansive beyond all limits, and yet there is precious little we do better than grow on a consistent basis. This unwavering commitment to growth, under every possible situation, has brought the world some huge milestones, with technology emerging as a major member of the group. The high regard in which we hold technology is, by and large, predicated upon its skill-set, which guided us toward a reality nobody could have imagined otherwise. Look beyond the surface for a moment, though, and it becomes abundantly clear that technology's rise owed just as much to the way we applied those skills across real-world environments. That application gave the creation a spectrum-wide presence and, as a result, set off a full-blown tech revolution. The revolution duly scaled up the human experience through some genuinely unique avenues, and yet technology continues to bring forth the right goods. This has grown more and more evident in recent times, and assuming one new discovery ends up with the desired impact, it will only put that trend on a higher pedestal moving forward.

The research team at the U.S. Department of Energy's (DOE) Argonne National Laboratory has reportedly discovered an intriguing cooperative behavior that occurs among the complex mixtures of components in battery electrolytes. For some context, electrolytes are the materials that move charge-carrying particles known as ions between a battery's two electrodes, thus converting stored chemical energy into electricity. The team found that combining two different types of anions (negatively charged ions) with cations (positively charged ions) can significantly improve the overall battery's performance, a finding which suggests that careful selection of ion mixtures can let battery developers precisely tailor their devices to produce desired performance characteristics. To understand the importance of such a development, consider that the lithium-ion batteries used today have a limited ability to provide the performance attributes needed in critical applications like passenger electric vehicles and storing renewable energy on the grid. Given these limitations, researchers across the globe have begun to deem multivalent batteries a potentially better alternative. Multivalent batteries, in essence, use cations such as zinc, magnesium, and calcium, which carry a charge of +2 as opposed to +1 for lithium ions. Boasting a greater charge stock per ion, multivalent batteries are able to store and release more energy. Beyond that, the technology uses abundant elements supplied through stable, domestic supply chains, which looks a lot better when you consider that lithium is less abundant and has an expensive, volatile international supply chain. Such a setup not only makes multivalent batteries a more viable alternative for electric vehicles; they also have a use case around grid storage.
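The charge advantage is easy to quantify with Faraday's law: a metal anode's theoretical capacity is the charge per mole of metal spread over its molar mass. This quick sketch is standard textbook arithmetic, not part of the Argonne study, and it reproduces the commonly cited figures:

```python
F = 96485.0  # Faraday constant, C per mole of electrons

def capacity_mah_per_g(n_electrons, molar_mass_g):
    """Theoretical gravimetric capacity of a metal anode, in mAh/g."""
    # n*F coulombs/mol -> divide by 3600 for Ah, multiply by 1000 for
    # mAh, then divide by molar mass to get the per-gram capacity.
    return n_electrons * F / 3.6 / molar_mass_g

for metal, n, m in [("Li", 1, 6.94), ("Zn", 2, 65.38), ("Mg", 2, 24.31)]:
    print(f"{metal}: {capacity_mah_per_g(n, m):.0f} mAh/g")
# Li: 3862 mAh/g, Zn: 820 mAh/g, Mg: 2205 mAh/g
```

Gravimetric capacity alone still favors lithium because it is so light; the multivalent pitch rests on each Zn2+ or Mg2+ ion carrying twice the charge of Li+, together with the far greater abundance and more stable supply chains of these metals.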

Having covered their advantages, we must also mention that most multivalent batteries investigated by researchers so far have failed to perform well. This is because the ions and electrodes tend to be unstable and degrade, making it difficult for electrolytes to efficiently transport cations, which eventually diminishes the battery's ability to generate and store electricity. So how does the new discovery help the case of multivalent batteries? With zinc metal among the chemistry's main foundations, the research team set out to characterize the interactions that occur, and the structures that form, when zinc cations are combined with two different types of anions in the electrolyte. The effort included designing a laboratory-scale battery system comprising an electrolyte and a zinc anode. The electrolyte initially contained zinc cations and an anion called TFSI, which showed only a very weak attraction to the cations. Next, the team added chloride anions to the electrolyte; going by the available details, chloride showed a much stronger attraction to zinc cations. That is not where things ended, though. The researchers built upon their initial findings with three complementary techniques. First, they used X-ray absorption spectroscopy, conducted at Argonne's Advanced Photon Source, a DOE Office of Science user facility, probing the electrolyte with synchrotron X-ray beams to measure the absorption of the X-rays. Then there was Raman spectroscopy: conducted at Argonne's Electrochemical Discovery Laboratory, this technique illuminated the electrolyte with laser light before evaluating the scattered light. Lastly, they applied density functional theory at Argonne's Laboratory Computing Resource Center, where the team simulated and calculated the structures formed by the interactions among the ions in the electrolyte.

“These techniques characterize different aspects of the ion interactions and structures,” said Mali Balasubramanian, a physicist on the research team and one of the study’s authors. “X-ray absorption spectroscopy probes how atoms are arranged in materials at very small scales. Raman spectroscopy characterizes the vibrations of the ions, atoms and molecules. We can use the data on atom arrangements and vibrations to determine whether ions are separated or move together in pairs or clusters. Density functional theory can corroborate these characterizations through powerful computation.”

Owing to this extensive investigation, the researchers were able to figure out that the presence of chloride induced TFSI anions to pair with zinc cations. This marked a significant point, as the pairing of anions with a cation can affect the rate at which the cation can be deposited as metal on the anode during charging, and it can have a similar impact when the cation is stripped back into the electrolyte during discharge. To reconfirm their findings, the researchers repeated these experiments with two other ion mixtures, one swapping chloride for bromide ions and the other picking iodide ions over chloride. In both cases, bromide and iodide likewise succeeded in inducing TFSI anions to pair with zinc cations.

“What was particularly exciting about this result is that we didn’t expect to see what we saw. The idea that we can use one anion to draw a second anion closer to a cation was very surprising,” said Justin Connell, a materials scientist on the research team and one of the study’s authors.

Although the study seems significant as a whole, one area where the team placed special emphasis was the cooperation that occurred among different types of ions in an electrolyte. In simple terms, the presence of the weakly attracting anions reduced the amount of energy needed to pull zinc metal out of solution, while the presence of the strongly attracting anions reduced the amount of energy needed to put the zinc back into solution. Such coordination meant less energy was needed to facilitate a constant flow of electrons.

“Our observations highlight the value of exploring the use of different anion mixtures in batteries to fine-tune and customize their interactions with cations,” said Connell. “With more precise control of these interactions, battery developers can enhance cation transport, increase electrode stability and activity, and enable faster, more efficient electricity generation and storage. Ultimately, we want to learn how to select the optimal combinations of ions to maximize battery performance.”

For the future, the plan is to investigate how other multivalent cations, like magnesium and calcium, interact with various anion mixtures. Beyond that, the researchers will also dabble with machine learning to rapidly calculate the interactions, structures, and electrochemical activity that occur in and around many different ion combinations, an approach which, if found feasible, should accelerate the selection of the most promising combinations.

 

Making AI Catalyst for an Upgrade of Our Design and Manufacturing Space
https://enterpriseviewpoint.com/making-ai-catalyst-for-an-upgrade-of-our-design-and-manufacturing-space/
Tue, 14 Nov 2023 16:53:42 +0000

The post Making AI Catalyst for an Upgrade of Our Design and Manufacturing Space appeared first on Enterprise Viewpoint.

Over the years, many different traits have tried to define human beings in their own unique manner, and yet none has done a better job than our knack for improving at a consistent pace. This unwavering commitment to growth, under all possible circumstances, has brought the world some huge milestones, with technology emerging as a major member of the group. The high regard in which we hold technology is, by and large, predicated upon its skill-set, which guided us toward a reality nobody could have imagined otherwise. Look beyond the surface for a moment, though, and it becomes abundantly clear that technology's rise owed just as much to the way we applied those skills across real-world environments. That application gave the creation a spectrum-wide presence and, as a result, set off a full-blown tech revolution. The revolution duly scaled up the human experience through some genuinely unique avenues, and yet technology continues to bring forth the right goods. This has grown more and more evident in recent times, and assuming one new AI development ends up with the desired impact, it will only put that trend on a higher pedestal moving forward.

Autodesk has officially announced the launch of Autodesk AI, which is designed to unlock creativity, solve problems, and eliminate non-productive work across the industries that design and make the world around us. Available across the wider Autodesk portfolio, the solution comes well-equipped to deliver intelligent assistance and generative capabilities that let customers imagine and explore freely while producing precise, accurate, and innovative results. As for how this will happen at a more granular level, the answer is rooted in a collection of dedicated sub-solutions present within the product. For instance, in the architecture, engineering, and construction industry, Autodesk AI will begin its value proposition with Autodesk Forma, which can provide rapid wind, noise, and operational energy analysis to help you conduct smart early-stage planning and make design decisions that improve outcomes rather meaningfully. Next up is InfoGraphic, a Machine Learning Deluge Tool responsible for offering feedback on the best placement for retention ponds and swales, functionality that should help users prevent or reduce the impact of water disasters. Moving on, there is AutoCAD, which leverages artificial intelligence to help drafters iterate faster through handwritten notes and digital markups; the idea is to determine the intent of the user and recommend context-aware actions for easily incorporating changes. The last prominent detail in this space comes from Construction IQ, a tool that again uses AI to predict, prevent, and manage construction risks that might impact quality, safety, cost, or schedule.

The discipline we will now get into is product design and manufacturing, where Autodesk will use its Blank AI acquisition to enable conceptual design exploration for the automotive industry. By doing so, it will deliver accelerated outcomes, alongside 3D models that can be rapidly created, explored, and edited in real time using semantic controls and natural language, and guess what, you won’t need any advanced technical skills whatsoever around here. Another way through which Autodesk AI will enhance the design and manufacturing space is through Autodesk Fusion, which allows customers to automatically generate product designs that are optimized for manufacturing method, performance, cost, and more. Furthermore, Fusion workflows are being specifically conceived to ensure automated creation of templatized Computer-Aided Manufacturing toolpaths that can be adjusted by the user as needed. Complementing the same are automated drawings that will provide interactive experiences in sheet creation, view placement, and annotation workflows.

“As the trusted technology partner for Design and Make industries, Autodesk sees AI as a way for our customers to tackle the challenges they face and turn them into opportunities,” said Andrew Anagnost, President and Chief Executive Officer at Autodesk. “AI is the future of design and make, and Autodesk is pioneering this transition. We sit at the junction of many of the most creative and impactful industries in the world. We’ll continue to invest in AI because of its transformational potential to drive better outcomes for our customers’ businesses and the world.”

Rounding up the highlights is what Autodesk AI has on offer for the media and entertainment industry. For starters, the product banks upon generative scheduling capabilities in Autodesk Flow to automate scheduling for media and entertainment productions, doing so by managing the constantly shifting variables between teams and budgets. Notably, this generative scheduling approach produces results in minutes for a process that has traditionally taken days at a time. Given the dramatic difference, teams can predict, plan, and right-size resources to ensure creative bandwidth wherever needed. Turning our attention to Autodesk Flame, this one knows a thing or two about automating manual tasks such as keying, sky replacement, beauty work, and camera tracking for artists. Artists can also expect to interact with the company’s 3D animation software, Maya, and access its scene data using natural language text prompts. To make the offering all the more significant, Autodesk has also collaborated with Wonder Dynamics, a collaboration where AI will power a Maya plug-in to automatically animate, light, and compose computer-generated characters for live-action scenes.

The entire development provides an interesting follow-up to one recent State of Design and Make special report on AI, which claimed that out of all the companies surveyed, 77% revealed that they are planning to increase or strongly increase investment in AI over the course of the next three years. As for all the leaders who were surveyed, 66% agree that in two to three years AI will be essential. But what makes Autodesk an ideal candidate to make the most of this raging trend? Well, apart from all the AI-driven solutions, the company’s credentials are also markedly rooted in the fact that it has, to date, published more than 60 peer-reviewed research papers advancing the state of the art in AI and generative AI.

“With a commitment to security and ethical AI practices, we’re focused on delivering responsible AI solutions that address our customers’ needs,” said Raji Arasu, Chief Technology Officer at Autodesk. “Autodesk AI will continue to surface across the Autodesk platform–both in our existing products and our industry clouds–to enable better ways of designing and making.”

The post Making AI Catalyst for an Upgrade of Our Design and Manufacturing Space appeared first on Enterprise Viewpoint.

Setting a More Efficient Tone for the Evolving Semiconductor Industry https://enterpriseviewpoint.com/setting-a-more-efficient-tone-for-the-evolving-semiconductor-industry/ Fri, 10 Nov 2023 10:15:42 +0000 https://enterpriseviewpoint.com/?p=15410

The post Setting a More Efficient Tone for the Evolving Semiconductor Industry appeared first on Enterprise Viewpoint.

Surely, you can try and define human beings in many different ways, but the best way to do so is by digging into their tendency to get better on a consistent basis. This tendency, in particular, has really brought the world some huge milestones, with technology emerging as quite a major member of the group. The reason why we hold technology in such high regard is, by and large, predicated upon its skill-set, which guided us towards a reality that nobody could have ever imagined otherwise. Nevertheless, if we look beyond the surface for one hot second, it will become abundantly clear how this whole run was also very much inspired by the manner in which we applied those skills across a real-world environment. The latter component was, in fact, what gave the creation a spectrum-wide presence, and as a result, initiated a full-blown tech revolution. Of course, the next thing this revolution did was to scale up the human experience through some outright unique avenues, but even after achieving a feat so notable, technology will somehow continue to bring forth the right goods. The same has turned more and more evident in recent times, and assuming one new discovery ends up with the desired impact, it will only put that trend on a higher pedestal moving forward.

The research team at the US Department of Energy’s Center for Functional Nanomaterials (CFN) has successfully developed a new light-sensitive, organic-inorganic hybrid material that enables high-performance patternability by EUV lithography. To understand the significance of such a development, we must start by acknowledging how, with semiconductor feature sizes now approaching only a few nanometers, it has become enormously challenging to sustain this persistent device miniaturization. The stated challenge has led the semiconductor industry to adopt a relatively more powerful fabrication method, i.e., extreme ultraviolet (EUV) lithography. In case you are looking for some context, EUV lithography employs light that is only 13.5 nanometers in wavelength to form tiny circuit patterns in a photoresist, the light-sensitive material integral to the lithography process. This photoresist is essentially the template for forming the nanoscale circuit patterns in the silicon semiconductor. However, as we continue our progression towards more advanced but complicated systems, scientists across the globe are now pitted against the challenge of identifying the most effective resist materials. Enter CFN’s latest brainchild. These new photoresists are composed of both organic materials (those that primarily contain carbon and oxygen atoms) and inorganic materials (those usually based on metallic elements). Furthermore, both parts of the hybrid host their own unique chemical, mechanical, optical, and electrical properties due to their unique chemistry and structures. Hence, upon combining such individually substantial components, new hybrid organic-inorganic materials are born, materials that boast their own interesting properties. You see, the result happens to be a material that is more sensitive to EUV light, meaning it doesn’t need to be exposed to as much EUV light during patterning, which should cut process time sizeably. Not just that, the new hybrid material also has improved mechanical and chemical resistance, thus offering a far better value proposition as a template for high-resolution etching.

“To synthesize our new hybrid resist materials, organic polymer materials are infused with inorganic metal oxides by a specialized technique known as vapor-phase infiltration. This method is one of the key areas of materials synthesis expertise at CFN. Compared to conventional chemical synthesis, we can readily generate various compositions of hybrid materials and control their material properties by infusing gaseous inorganic precursors into a solid organic matrix,” said Chang-Yong Nam, a materials scientist at CFN who led the project.

An intriguing detail related to the said development is a change in the precursor used for the metal. Rather than banking upon aluminum like they did during previous efforts, the team leveraged indium as the inorganic component. In practice, they made the new resist using a poly(methyl methacrylate) (PMMA) thin film as the organic component and infiltrated it with inorganic indium oxide. By doing so, they were able to achieve improved uniformity in subsequent patterning.

That being said, EUV patterning remains a largely inaccessible commodity for now, and that is because of the costs involved.

“It’s currently really hard to do EUV patterning,” said Nam. “The actual patterning machine that industry is using is very, very expensive—the current version is more than $200 million per unit. There are only three to four companies in the world that can use it for actual chip manufacturing. There are a lot of researchers who want to study and develop new photoresist materials but can’t perform EUV patterning to evaluate them. This is one of the key challenges we hope to address.”

In terms of the research team’s immediate plans for the technology, though, it has already started work on other hybrid material compositions. The intention is to also make some headway on the processes involved in fabricating them, thus better positioning the industry to pattern smaller, more efficient, and sustainable semiconductor devices.



Generating Higher Awareness to Transform the Robotics’ Utility https://enterpriseviewpoint.com/generating-higher-awareness-to-transform-the-robotics-utility/ Tue, 07 Nov 2023 13:02:58 +0000 https://enterpriseviewpoint.com/?p=15406

The post Generating Higher Awareness to Transform the Robotics’ Utility appeared first on Enterprise Viewpoint.

Over the years, many different traits have tried to capture the psyche of human beings but, to be honest, none have done a better job than our trait of improving at a consistent pace. This factor, on its part, has enabled the world to clock some huge milestones, with technology emerging as quite a major member of the stated group. The reason why we hold technology in such high regard is, by and large, predicated upon its skill-set, which guided us towards a reality that nobody could have ever imagined otherwise. Nevertheless, if we look beyond the surface for one hot second, it will become abundantly clear how this whole run was also very much inspired by the way we applied those skills across a real-world environment. The latter component, in fact, did a lot to give the creation a spectrum-wide presence, and as a result, initiated a full-blown tech revolution. Of course, the next thing this revolution did was to scale up the human experience through some outright unique avenues, but even after achieving a feat so notable, technology will somehow continue to bring forth the right goods. The same has turned more and more evident in recent times, and assuming one new discovery ends up with the desired impact, it will only put that trend on a higher pedestal moving forward.

The research team at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has successfully developed a system called Feature Fields for Robotic Manipulation (F3RM), which is designed to blend 2D images with foundation model features and create 3D scenes that help robots identify and grasp nearby items. According to certain reports, F3RM comes decked up with an ability to interpret open-ended language prompts from humans, an ability making it extremely useful in real-world environments that contain thousands of objects, like warehouses and households. But how does the system work on a more granular level? Well, the proceedings begin with F3RM taking pictures using a camera mounted on a selfie stick. The camera snaps 50 images at different poses, enabling the system to build a neural radiance field (NeRF), a deep learning method which takes 2D images to construct a 3D scene. The resulting collage of RGB photos creates a “digital twin” of the surroundings in what is a 360-degree representation that shows what’s nearby. Apart from the highly detailed neural radiance field, F3RM also builds a feature field to augment geometry with semantic information. You see, it uses CLIP, a vision foundation model trained on hundreds of millions of images, to efficiently learn visual concepts. Hence, with the pictures’ 2D features reconstructed in an enhanced form, F3RM effectively lifts the 2D features into a 3D representation.
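For intuition, the 2D-to-3D lifting step can be sketched in a few lines: project each 3D sample point into every camera view, gather the 2D features at the projected pixels, and average them into a per-point 3D feature. The snippet below is a simplified, hypothetical illustration (the actual F3RM pipeline distills features through a trained NeRF rather than averaging raw projections), and all function and variable names are our own.

```python
import numpy as np

def lift_features_to_3d(points, cameras, feature_maps):
    """Average per-view 2D features at each 3D point's projection.

    points:       (N, 3) array of 3D sample locations
    cameras:      list of 3x4 projection matrices (world -> homogeneous pixel)
    feature_maps: list of (H, W, D) per-view 2D feature maps (e.g. CLIP-like)
    """
    n_points = points.shape[0]
    dim = feature_maps[0].shape[2]
    field = np.zeros((n_points, dim))
    counts = np.zeros(n_points)
    homog = np.hstack([points, np.ones((n_points, 1))])  # (N, 4)
    for P, feats in zip(cameras, feature_maps):
        h, w, _ = feats.shape
        proj = homog @ P.T                # (N, 3) homogeneous pixel coords
        uv = proj[:, :2] / proj[:, 2:3]   # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        # keep points in front of the camera and inside the image
        visible = (proj[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        field[visible] += feats[v[visible], u[visible]]
        counts[visible] += 1
    seen = counts > 0
    field[seen] /= counts[seen][:, None]  # average over the views that saw each point
    return field
```

With 50 posed views, calling this over a grid of sample points yields a dense 3D feature field that pairs geometry with semantics.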

“Visual perception was defined by David Marr as the problem of knowing ‘what is where by looking,'” said Phillip Isola, senior author on the study, MIT associate professor of electrical engineering and computer science, and CSAIL principal investigator. “Recent foundation models have gotten really good at knowing what they are looking at; they can recognize thousands of object categories and provide detailed text descriptions of images. At the same time, radiance fields have gotten really good at representing where stuff is in a scene. The combination of these two approaches can create a representation of what is where in 3D.”

Having lifted the 2D features into a 3D representation, it’s time for us to understand how the new system helps in actually controlling the objects. Basically, after receiving a few demonstrations, the robot applies what it knows about geometry and semantics to grasp objects it has never encountered before. As a result, when a user submits a text query, the robot searches through the space of possible grasps to identify those most likely to succeed in picking up the requested object. During this selection process, each potential option is scored based on its relevance to the prompt, its similarity to the demonstrations the robot has been trained on, and whether it causes any collisions. Based on those scores, the robot picks the most suitable course of action. In case that somehow wasn’t impressive enough, it must be mentioned how F3RM further enables users to specify which object they want the robot to handle at different levels of linguistic detail. To explain it better, if there is a metal mug and a glass mug, the user can ask the robot for the “glass mug.” Hold on, we aren’t done, as even if both mugs are of glass and one of them is filled with coffee and the other with juice, the user can ask the robot for the “glass mug with coffee.”
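The selection process above can be sketched as a weighted combination of prompt relevance and demonstration similarity, with colliding grasps filtered out. Everything here is illustrative: the names, the weights, and the plain cosine-similarity scoring are our own assumptions rather than F3RM’s actual implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two plain-list feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-9)

def rank_grasps(candidates, text_feature, demo_features, w_text=1.0, w_demo=1.0):
    """Score candidate grasps and return them best-first.

    candidates:    list of dicts with 'feature' (local 3D feature vector)
                   and 'collides' (bool from a collision check)
    text_feature:  embedding of the user's language query
    demo_features: features recorded at demonstrated grasp points
    """
    scored = []
    for g in candidates:
        if g["collides"]:
            continue  # discard grasps that would hit the scene
        text_score = cosine(g["feature"], text_feature)
        demo_score = max(cosine(g["feature"], d) for d in demo_features)
        scored.append((w_text * text_score + w_demo * demo_score, g))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [g for _, g in scored]
```

Because the query embedding enters the score directly, a more specific prompt (“glass mug with coffee”) simply shifts which candidate features rank highest.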

“If I showed a person how to pick up a mug by the lip, they could easily transfer that knowledge to pick up objects with similar geometries such as bowls, measuring beakers, or even rolls of tape. For robots, achieving this level of adaptability has been quite challenging,” said William Shen, Ph.D. student at MIT and co-lead author on the study. “F3RM combines geometric understanding with semantics from foundation models trained on internet-scale data to enable this level of aggressive generalization from just a small number of demonstrations.”

In a test conducted on the system’s ability to interpret open-ended requests from humans, the researchers prompted the robot to pick up Baymax, a character from Disney’s “Big Hero 6.” Interestingly, F3RM had never been directly trained to pick up a toy of the cartoon superhero, but the robot was successful in leveraging its spatial awareness and vision-language features from the foundation models to decide which object to grasp and how to pick it up.

“Making robots that can actually generalize in the real world is incredibly hard,” said Ge Yang, postdoc at the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions. “We really want to figure out how to do that, so with this project, we try to push for an aggressive level of generalization, from just three or four objects to anything we find in MIT’s Stata Center. We wanted to learn how to make robots as flexible as ourselves, since we can grasp and place objects even though we’ve never seen them before.”



Giving the World a More Advanced Set of Computational Powers https://enterpriseviewpoint.com/giving-the-world-a-more-advanced-set-of-computational-powers/ Fri, 03 Nov 2023 11:13:21 +0000 https://enterpriseviewpoint.com/?p=15390

The post Giving the World a More Advanced Set of Computational Powers appeared first on Enterprise Viewpoint.

Human beings might have the means to do a lot, but that doesn’t change the fact that we do little better than growing on a consistent basis. This relentless commitment towards achieving an improved version of ourselves, in every possible situation, has brought the world some huge milestones, with technology emerging as quite a major member of the group. The reason why we hold technology in such high regard is, by and large, predicated upon its skill-set, which guided us towards a reality that nobody could have ever imagined otherwise. Nevertheless, if we look beyond the surface for one hot second, it will become abundantly clear how this whole run was also very much inspired by the way we applied those skills across a real-world environment. The latter component, in fact, did a lot to give the creation a spectrum-wide presence, and as a result, initiated a full-blown tech revolution. Of course, the next thing this revolution did was to scale up the human experience through some outright unique avenues, but even after achieving a feat so notable, technology will somehow continue to bring forth the right goods. The same has turned more and more evident in recent times, and assuming one new discovery ends up with the desired impact, it will only put that trend on a higher pedestal moving forward.

The research teams at the Massachusetts Institute of Technology and NVIDIA have successfully developed two techniques that are meant to accelerate the processing of sparse tensors, a type of data structure used for high-performance computing tasks. According to certain reports, the techniques in question will bring significant improvements to the performance and energy efficiency of systems like the massive machine-learning models that drive generative artificial intelligence. As for how they will do so, the answer is rooted in exploiting sparsity: the zero values in the tensors. You see, given that these values have no meaningful role whatsoever, one can just skip over them and save on both computation and memory. This way it becomes possible to compress the tensor and allow a larger portion to be stored in on-chip memory. That being said, there is also a reason why this hasn’t been achieved so far. For starters, finding the non-zero values in a large tensor is no easy task, as existing approaches often limit the locations of non-zero values by enforcing a sparsity pattern to simplify the search. With a pre-defined pattern in place, the variety of tensors that can be processed efficiently becomes too thin. Another challenge in play here is that the number of non-zero values can vary across different regions of the tensor. Such variance makes it difficult to determine how much space is required to store different regions in memory. To overcome this, more space than necessary is often allocated, which, in turn, leaves the storage buffer underutilized. Extending the ripple effect is a notable increase in off-chip memory traffic that, of course, requires extra computation. But how did the MIT and NVIDIA researchers solve this conundrum? Out of their two techniques, the first one efficiently finds the non-zero values for a wider variety of sparsity patterns. For the second, they created a method that can handle the case where the data doesn’t fit in memory. The stated focus is intended to go a long way when it comes to increasing the utilization of the storage buffer, while simultaneously reducing off-chip memory traffic. Although a tad different in their function, both methods boost the performance and reduce the energy demands of hardware accelerators.
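To see why skipping zeros saves both memory and compute, consider a minimal compressed-sparse-row style sketch in plain Python: only the non-zero values and their positions are stored, and a matrix-vector product touches only those values. This is a simplified textbook illustration of the general idea, not the researchers’ actual on-chip format.

```python
def compress_sparse(rows):
    """Store only non-zero values with their column indices (CSR-style)."""
    values, cols, row_ptr = [], [], [0]
    for row in rows:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                cols.append(j)
        row_ptr.append(len(values))  # marks where each row's values end
    return values, cols, row_ptr

def spmv(values, cols, row_ptr, x):
    """Sparse matrix-vector product: the zeros are never touched."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[cols[k]]
        y.append(acc)
    return y
```

Note how `row_ptr` also exposes the storage-allocation problem the article describes: each row may own a different number of values, so a fixed-size buffer per row would sit partly empty.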

“Typically, when you use more specialized or domain-specific hardware accelerators, you lose the flexibility that you would get from a more general-purpose processor, like a CPU. What stands out with these two works is that we show that you can still maintain flexibility and adaptability while being specialized and efficient,” said Vivienne Sze, associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the Research Laboratory of Electronics (RLE), and co-senior author of papers on both advances.

In reference to hardware accelerators, the research teams have also developed a dedicated and improved iteration of the same. Named HighLight, the accelerator can handle a wide variety of sparsity patterns and still perform well when running models that don’t have any zero values. It delivers on this promised value proposition through “hierarchical structured sparsity,” which represents a wide variety of sparsity patterns that are actually just made from multiple simple sparsity patterns. Here, the researchers divide the values in a tensor into smaller blocks, where each block has its own simple sparsity pattern (perhaps two zeros and two non-zeros in a block with four values). Next up, they combine the blocks into a hierarchy. One can continue to combine blocks into larger levels, but the patterns remain simple at each step. This sort of capability enables the accelerator to find and skip zeros, meaning it can also effectively root out the problem of excess computation. If we are strictly talking numbers, then we must mention that the accelerator design is said to be six times more energy-efficient than other approaches.
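The block-then-hierarchy idea can be illustrated with a small sketch: each four-value block keeps at most two entries (a simple N:M-style pattern), and an upper level records which blocks are empty so they can be skipped wholesale. The encoding below is a toy illustration of the concept, not HighLight’s actual hardware format.

```python
def encode_hierarchical(flat, block=4, keep=2):
    """Two-level structured sparsity: each block keeps its `keep`
    largest-magnitude values, and an upper level marks empty blocks."""
    blocks = []
    for i in range(0, len(flat), block):
        chunk = flat[i:i + block]
        # keep the `keep` largest-magnitude entries, drop the rest
        kept = sorted(range(len(chunk)),
                      key=lambda j: abs(chunk[j]), reverse=True)[:keep]
        entries = [(j, chunk[j]) for j in sorted(kept) if chunk[j] != 0]
        blocks.append(entries)                    # positions + values only
    occupancy = [len(b) > 0 for b in blocks]      # upper-level pattern
    return occupancy, blocks

def decode_hierarchical(occupancy, blocks, block=4):
    """Reconstruct the dense vector, skipping blocks flagged as empty."""
    out = []
    for occ, entries in zip(occupancy, blocks):
        chunk = [0] * block
        if occ:
            for j, v in entries:
                chunk[j] = v
        out.extend(chunk)
    return out
```

Because every block obeys the same simple pattern, the hardware only ever searches within a tiny fixed structure, while the occupancy level lets it jump past whole empty regions.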

“In the end, the HighLight accelerator is able to efficiently accelerate dense models because it does not introduce a lot of overhead, and at the same time it is able to exploit workloads with different amounts of zero values based on hierarchical structured sparsity,” said Yannan Nellie Wu, co-lead author on the study.

For the future, the research teams at MIT and NVIDIA plan to apply hierarchical structured sparsity to more types of machine-learning models and different types of tensors in those models.


Setting a New Dash Cam Benchmark https://enterpriseviewpoint.com/setting-a-new-dash-cam-benchmark/ Wed, 01 Nov 2023 14:18:25 +0000 https://enterpriseviewpoint.com/?p=15385

The post Setting a New Dash Cam Benchmark appeared first on Enterprise Viewpoint.

Surely, we have proven ourselves to be good at a host of different things, but to be completely honest, the one area we are best at is growing on a consistent basis. This unwavering commitment towards improving, no matter the situation, has brought the world some huge milestones, with technology emerging as quite a major member of the stated group. The reason why we hold technology in such high regard is, by and large, predicated upon its skill-set, which guided us towards a reality that nobody could have ever imagined otherwise. Nevertheless, if we look beyond the surface for one hot second, it will become abundantly clear how this whole run was also very much inspired by the way we applied those skills across a real-world environment. The latter component, in fact, did a lot to give the creation a spectrum-wide presence, and as a result, initiated a full-blown tech revolution. Of course, the next thing this revolution did was to scale up the human experience through some outright unique avenues, but even after achieving a feat so notable, technology will somehow continue to bring forth the right goods. The same has turned more and more evident in recent times, and assuming one new discovery ends up with the desired impact, it will only put that trend on a higher pedestal moving forward.

Nextbase, the global leader in dash cam technology, has officially introduced its latest product, the Nextbase IQ, a smart and fully connected dash cam designed for all vehicles. Understood to be the only cam offering up to 4K resolution and a built-in interior cabin camera together, Nextbase IQ comes decked up with AI-powered technology and 4G IoT connectivity for real-time access from anywhere at any time. Having both these technologies in play should also help customers when it comes to anticipating, preventing, and defending against incidents on the road and while their vehicle is parked. Talking about Nextbase IQ on a slightly deeper level: we referred to its use case in parking, and the dash cam builds upon that by leveraging its proximity-sensing Spatial Awareness and G-Force Sensors to scan the area surrounding your vehicle. In case they detect any potential danger, the sensors immediately send your smartphone a real-time notification that includes imagery and video relating to the source of concern. Complementing the same is the cam’s Witness Mode, which can be activated by voice to instantly save a 30-minute block of video to the cloud and then push a notification to an emergency contact in real time. The idea here is to make sure that you don’t have to face a stressful situation alone, or at least, without evidence. Beyond that one emergency contact, the product also automatically alerts emergency services, in the event of an accident, with location and other potentially life-saving details. The dash cam even has the means to actively determine, on its own, the current status of a vehicle, meaning whether it is on the move or just happens to be parked. Boasting separate and dedicated responses for both these states, Nextbase IQ is able to scan for threats effectively, while simultaneously preserving battery for extended use.
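The parked-versus-driving split described above amounts to a simple state-dependent dispatch: the same sensor readings trigger different responses depending on whether the vehicle is moving. The sketch below is purely illustrative of that logic, with hypothetical thresholds and response names that are not Nextbase’s actual firmware behavior.

```python
from dataclasses import dataclass

@dataclass
class Event:
    g_force: float      # reading from the G-force sensor
    proximity_m: float  # nearest detected object, from spatial-awareness sensing

def respond(vehicle_moving, event, g_threshold=1.5, proximity_threshold=0.5):
    """Pick a response based on vehicle state, mirroring the parked/driving split."""
    if vehicle_moving:
        if event.g_force > g_threshold:
            return "record_incident_and_alert_emergency_contact"
        return "continuous_recording"
    # parked: conserve battery, wake only on nearby threats or impacts
    if event.proximity_m < proximity_threshold or event.g_force > g_threshold:
        return "push_notification_with_clip"
    return "low_power_monitoring"
```

Keeping the parked branch dormant until a sensor crosses a threshold is what lets a battery-backed device watch for threats without draining itself.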

“Nextbase iQ is unlike anything on the market today,” said Richard Browning, Chief Marketing and Sales Officer for Nextbase. “It sets a new benchmark for dash cams. But more than that, it creates a whole new category by extending high-end smart home-style connectivity into the car and making advanced connected-car technology – on par or even better than that offered on the most expensive, tech-forward new cars – available to everyone, regardless of the age, type or value of their vehicle.”

Despite the product still being new, Nextbase has already penciled in a set of features that will be integrated into its dash cam over the coming months. Among the upcoming features, there is a Guardian Mode, which will notify vehicle owners/parents based on specific customizable driving behavior triggers, such as speeding, erratic driving, and GPS boundaries, to keep you alerted when your vehicle is left with a valet, mechanic, or family member. Next up, we must dig into a Push to Talk feature, which is meant to let you speak directly with occupants in the vehicle, as well as alert intruders to your presence. Moving on, Nextbase will be bringing a Roadwatch AI function that is going to bank upon a Computer Vision (CV) chipset and better vision to track the speeds and trajectories of other vehicles around you, including cyclists, e-scooters, and pedestrians. Hold on, we are still not done. Nextbase has also partnered with T-Mobile to generate IoT-enabled, real-time footage and reports before seamlessly sharing them with authorities or insurance companies following any incident, thus protecting drivers and their vehicles. These encrypted reports will have synchronized data covering speed, G-force, GPS, and video. Rounding up the highlights are AI-powered DMS and ADAS capabilities. The former will be responsible for monitoring and improving drivers’ situational awareness so as to cut back on distracted driving, whereas the latter will focus on enhancing safe operation of the vehicle and enabling drivers to respond accordingly.

At launch, Nextbase has made available three different iterations of Nextbase IQ. Firstly, there is the 1K, a model with 1080p resolution, available at $499.99. The second model, the 2K, holds 1440p resolution while costing $599.99. The last model, the 4K, is the one offering 4K resolution for $699.99.

All these models are also supported by a versatile set of subscription packages, where the free package gets you access to the iQ app, voice control, and real-time notifications. The next plan, priced at $9.99 monthly, includes everything the previous package had, alongside smart sense parking, witness mode, Roadwatch AI, guardian mode, remote alarm, and cloud storage good for 30 days. The last plan, of course, has all the stated features, but it sets itself apart by offering cloud storage that is good for 180 days, an option to have multiple user accounts, emergency SOS, automated incident back-up, and an extended warranty.


Harnessing AI to Give Your Journeys a More Informed Theme https://enterpriseviewpoint.com/harnessing-ai-to-give-your-journeys-a-more-informed-theme/ Mon, 30 Oct 2023 11:17:59 +0000 https://enterpriseviewpoint.com/?p=15381

The post Harnessing AI to Give Your Journeys a More Informed Theme appeared first on Enterprise Viewpoint.

There isn’t much that has remained out of our reach, and yet there still remains little that we do better than growing on a consistent basis. This commitment towards getting better, no matter the circumstances, has enabled the world to clock some huge milestones, with technology emerging as quite a major member of the group. The reason why we hold technology in such high regard is, by and large, predicated upon its skill-set, which guided us towards a reality that nobody could have ever imagined otherwise. Nevertheless, if we look beyond the surface for one hot second, it will become clear how this whole run was also very much inspired by the way we applied those skills across a real-world environment. The latter component, in fact, did a lot to give the creation a spectrum-wide presence, and as a result, initiated a full-blown tech revolution. Of course, the next thing this revolution did was to scale up the human experience through some outright unique avenues, but even after achieving a feat so notable, technology will somehow continue to bring forth the right goods. The same has turned more and more evident in recent times, and assuming Google’s latest move ends up with the desired impact, it will only put that trend on a higher pedestal moving forward.

Google has officially announced the launch of various new AI features that are geared towards helping Google Maps provide more immersive navigation, easier-to-follow driving directions, and better-organized search results. For starters, the tech behemoth has updated the application’s search function to make it easier to find specific things near you. In practice, when you search for something, you will see specific photo results of what you are looking for. These photos come after AI and advanced image-recognition models link up to analyze the picture content shared by other users on Google Maps.

Next up, the navigation platform will help you figure out whether the EV charging station you are planning to stop by is actually working or not. You see, according to studies, nearly 25 percent of chargers are down or inoperable at any given time. Hence, Google Maps will now be able to tell you when a charger was last used; if the station was used a few hours ago, chances are it’s working. On the other hand, if it has been a few days or weeks since it was last used, that might be a sign to find another charger. Complementing the same are more EV details, such as the charger’s compatibility with your EV and whether it’s fast, medium, or slow. Hold on, there is more for all EV owners and car companies. You see, Google has also made a call to offer updated Places APIs to help build out better features for cars with navigation systems based on Google Maps. As a result, car companies can use the Places API to surface more EV charging information, so their customers can see real-time location information, plug type, and charging speeds directly on their vehicle’s infotainment screens.

Speaking of improving the driver’s experience, the search engine giant further solidifies that pledge through Immersive View, which was first announced earlier this year. Immersive View, for anyone who doesn’t know, brings a 3D view of a place to help users see where they’re supposed to go, while also offering other tidbits of information, like local business locations, weather, and traffic.
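The charger-freshness rule of thumb described above (used a few hours ago, probably working; idle for days or weeks, probably not) can be sketched as a small helper. This is a minimal illustration, not Google’s actual logic; the six-hour and two-day thresholds are assumptions chosen purely for the example.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def charger_freshness_hint(last_used: datetime,
                           now: Optional[datetime] = None) -> str:
    """Classify an EV charger by how recently it was last used.

    Thresholds are illustrative assumptions, not figures from Google:
    used within ~6 hours -> probably working; idle beyond ~2 days ->
    probably worth finding another charger.
    """
    now = now or datetime.now(timezone.utc)
    idle = now - last_used
    if idle <= timedelta(hours=6):
        return "likely working"
    if idle <= timedelta(days=2):
        return "unknown"
    return "consider another charger"
```

A station pinged two hours ago would come back as "likely working", while one untouched for two weeks would return "consider another charger".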

“AI has really supercharged the way we map,” said Chris Phillips, vice president and general manager of Geo, the team at Google that includes all of its geospatial location mapping products. “It plays a key role in everything from helping you navigate, [helping you] commute, discover new restaurants, where to go, when to go. These are all really important decisions that people are making all the time.”
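For a sense of how a car company might pull the EV charging details mentioned above, here is a sketch of a request to the Places API (New) Nearby Search endpoint, asking for the `evChargeOptions` field. The endpoint, type, and field names reflect Google’s public Places API documentation as best understood here, and the function only builds the request payload; treat the specifics as assumptions to verify against the official docs before use.

```python
# Hypothetical sketch of a Places API (New) "searchNearby" request
# that asks for EV charging details alongside each place's name.
PLACES_ENDPOINT = "https://places.googleapis.com/v1/places:searchNearby"

def build_ev_charger_request(lat: float, lng: float, radius_m: float) -> dict:
    """Return the URL, headers, and JSON body for a nearby EV-charger search."""
    body = {
        "includedTypes": ["electric_vehicle_charging_station"],
        "maxResultCount": 10,
        "locationRestriction": {
            "circle": {
                "center": {"latitude": lat, "longitude": lng},
                "radius": radius_m,
            }
        },
    }
    headers = {
        "Content-Type": "application/json",
        "X-Goog-Api-Key": "YOUR_API_KEY",  # placeholder, not a real key
        # evChargeOptions carries connector types, counts, and charge rates
        "X-Goog-FieldMask": "places.displayName,places.evChargeOptions",
    }
    return {"url": PLACES_ENDPOINT, "headers": headers, "body": body}
```

An infotainment backend would POST this body to the endpoint and read connector counts and charge rates out of each result’s `evChargeOptions`.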

Another detail worth a mention would be how Google is rebranding its augmented reality feature “Search with Live View” to “Lens in Maps”. Basically, you can use this feature by tapping on the Lens icon in the search bar and then holding up your camera to find information about the nearest train stations, coffee shops, ATMs, or whatever else happens to be in close proximity to your location. Coming back to in-car capabilities, Google is set to bring HOV lane information for US drivers, a feature workable on both Android and iOS devices, as well as in cars with Google built-in. Alongside HOV lanes, you can also expect a lowdown on speed limit information. Rounding off these highlights is the new look in which the whole value proposition will be presented. By new look, we mean updated colors, more realistic buildings, and improved lane details for tricky highway exits. The stated makeover will be accessible across 12 countries, including the US, Canada, France, and Germany.

“The foundation of all the work we do is to build the most comprehensive, fresh, accurate information to represent the real world,” Phillips said. “This is key for us, and we like to talk about the map as being alive.”


]]>