Enterprise Information Integration (2024)

3 days ago

Luka

Web Design and Development, Standards and Protocols, Enterprise Information Integration, Technology

Oxylabs introduces the first AI copilot for public web data collection

Oxylabs Launches OxyCopilot to Revolutionize Web Data Collection

September 27, 2024 6:44:58PM

AZoRobotics.com

Oxylabs, Sep 27, 2024

Oxylabs, a premium web intelligence collection platform, announced the launch of a new AI-driven solution, OxyCopilot, that is designed to help its users save time and money spent on complex web data acquisition tasks. The industry's first artificial intelligence (AI) assistant for scraping is a part of Oxylabs' Web Scraper API, an all-in-one public web data collection platform.

Built on a combination of AI and Oxylabs' proprietary technology, OxyCopilot will enable both web scraping professionals and those with little experience in web data collection to generate instant parsing instructions and requests for the Web Scraper API using nothing more than a URL and natural language prompts.
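
The announcement itself contains no code, but a minimal sketch of the kind of Web Scraper API request such a copilot would produce might look like the following. The endpoint, the source/parse/parsing_instructions payload fields, and the xpath_one selector function are assumptions based on Oxylabs' public documentation; the target URL and the selectors are purely illustrative.

```python
# Hypothetical sketch (Python + requests) of a Web Scraper API call with
# machine-generated parsing instructions; verify the endpoint and payload
# schema against the current Oxylabs documentation before relying on it.
import requests

payload = {
    "source": "universal",                     # generic target type (assumed)
    "url": "https://example.com/product/123",  # illustrative target page
    "parse": True,                             # ask the API to return parsed fields
    "parsing_instructions": {                  # selectors invented for illustration
        "title": {"_fns": [{"_fn": "xpath_one", "_args": ["//h1/text()"]}]},
        "price": {"_fns": [{"_fn": "xpath_one",
                            "_args": ["//span[@class='price']/text()"]}]},
    },
}

response = requests.post(
    "https://realtime.oxylabs.io/v1/queries",  # assumed realtime endpoint
    auth=("OXYLABS_USERNAME", "OXYLABS_PASSWORD"),  # placeholder credentials
    json=payload,
    timeout=120,
)
response.raise_for_status()
print(response.json()["results"][0]["content"])  # parsed title/price, if returned
```

In the workflow described above, OxyCopilot would generate the parsing_instructions block and the surrounding request from nothing more than the target URL and a plain-language description of the fields to extract.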

"In Summer, together with Censuswide, we carried out a wide survey of developers and web scraping practitioners in the UK and US. The data showed that 74% of businesses faced an increasing demand for public web data over the last year. Unfortunately, building the necessary infrastructure and maintaining data parsers remains a heavy challenge for many companies -- a proper parsing process alone can take up to 40 development hours per week for the tech teams. With OxyCopilot, we aimed to help our clients collect the data they need more easily", says Julius Cerniauskas, CEO at Oxylabs.

According to Cerniauskas, the newly released AI assistant helps level the playing field. Previously, the barrier to entry for smaller companies was high because they needed to hire web scraping professionals who are hard to find on the market. By saving development hours otherwise spent on repetitive web scraping tasks, businesses can devote more attention to data quality, analysis, and innovation.

Moreover, using Oxylabs' unified web scraping platform, they don't need to maintain costly infrastructure, such as servers -- a challenge that 61% of professionals identify as the most pressing when collecting data on a large scale.

"We always aspired to be industry leaders through innovation, and taking a step further in the field of AI was a natural business decision for us. OxyCopilot is unique in its design, and we are currently patenting technological implementations behind it. Most importantly, it functions as a part of a broader platform that includes other AI-powered solutions, from proxy management to web unblocking. With the help of AI machine learning (ML), we are moving towards automating the entire public web data collection process", adds Cerniauskas.

Oxylabs holds over 100 patents globally, some of which cover technologies with AI and ML implementations. In 2023, the company obtained ISO/IEC 27001:2017 certification for excellence in information security management.

You can read more about building the industry's first web scraping AI copilot here.

Source:

Oxylabs

Similar articles from other sources

Oxylabs introduces the first AI copilot for public web data collection

September 27, 2024 8:24:21AM

MarTech Series

Oxylabs introduces the first AI copilot for public web data collection

About Oxylabs

Established in 2015, Oxylabs is a web intelligence platform and premium proxy provider, enabling companies of all sizes to utilize the power of big data. Constant innovation, an extensive patent portfolio, and a focus on ethics have allowed Oxylabs to become a global leader in the web intelligence collection industry and forge close ties with dozens of Fortune Global 500 companies. In 2022, 2023 and 2024, Oxylabs was named Europe's fastest-growing web intelligence acquisition company in the Financial Times' FT 1000 list. For more information, please visit: https://oxylabs.io/

Media Contact:

Vytautas Kirjazovas

[email protected]

September 26, 2024 10:03:56PM

StreetInsider.com

Oxylabs introduces the first AI copilot for public web data collection

September 26, 2024 10:26:29PM

The Manila Times

Oxylabs introduces AI copilot for web data collection - Global Security Mag Online

September 26, 2024 10:36:27AM

Global Security Mag Online

18 days ago

Luka

Databases, Enterprise Information Integration, Data Administration, Technology

The Tie Launches Polkadot Ecosystem Dashboard on The Tie Terminal - Decrypt

Bill Laboon, Director of Education at Web3 Foundation, on Polkadot's 1000 Referendums, Centralized Points in Decentralized Polkadot, and Sprinkling Blockchain Dust | Ep. 369

September 13, 2024 6:41:08PM

cryptonews.com

Cryptonews Podcast host Matt Zahab sat down for an exclusive interview with Bill Laboon, the Director of Education and Governance Initiatives at the Web3 Foundation, a not-for-profit entity supporting the Polkadot ecosystem.

Laboon discussed everything Polkadot and various aspects of its ecosystem, the real competitors to Polkadot and the Web3 Foundation, and how the Foundation and Parity turned out to be centralized points in a decentralized project.

He talked about the community reaching a whopping 1,000 referendums and crypto needing more end-user-focused apps if it's to go mainstream.

Will the Real Competitor Please Stand Up

Web3 Foundation is a not-for-profit company, meaning it doesn't aim to make money. It has funds that the team uses for day-to-day operating expenses.

Instead, the Foundation was founded to bring about the decentralized internet, a world where people own their own data, Laboon told Matt.

One of the many ways they're accomplishing this is by supporting the Polkadot ecosystem.

To help shepherd Polkadot, the Foundation provides education, outreach, and grants. It also does coordination and interacts with the community.

Other blockchain projects are not Polkadot's competitors, Laboon said. "We're working towards the same goal in different ways, and we do a lot of interoperability with them."

Actually, argued the director, "our competitor really is the Web2 world. We have different ways of going about our way of winning."

Moreover, while Web3 Foundation focuses on Polkadot, it always had a "broader mandate."

Therefore, they have provided funding to people building other projects and researching other ecosystems, which doesn't directly benefit Polkadot but is more generalized.

Anyone can issue a referendum. The large community of DOT holders makes all decisions by voting.

The ecosystem is "entirely decentralized," Laboon said. "The control is entirely in the hands of the DOT holders," including any upgrades, updates, or spending from the treasury.

Polkadot has its OpenGov governance system where one DOT equals one vote.

"It's just a very direct democracy. And this has been very challenging," Laboon remarked.

Importantly, there is no dependency on developers to implement referendums.

Instead, users issue referendums in code, which will execute following a successful vote - and "can actually change the rules of the blockchain, which I think is really neat."

"There's no way to stop it," Laboon added.

Web3 Foundation does look through referenda, double-checking if something seems malicious.

They interact with governance teams and answer people's questions, but they prefer not to share opinions unless something is dangerous.

Furthermore, "we've figured out what the community does and what they're interested in versus what's actually good for the long-term health of the network," the director noted. "It's been a very interesting experience in democracy."

Even if the Foundation expresses a positive opinion on a referendum, the community can vote it down. They recently did just that, arguing that an upgrade should run longer on the testnet.

This allowed more time for the wallets to understand the upgrade, given that Polkadot is not a single blockchain but a system of 51 blockchains (aka parachains).

"Three weeks later, [the referendum] was reissued, and the network was upgraded," Laboon said.

Sprinkling a Bit of Blockchain Dust Doesn't Make It a Blockchain App

The crypto market needs useful and user-facing apps, Laboon argued.

We have plenty of infrastructure in the space and many people "building really cool tech for the developers."

"What we don't always have is people that are building things that are useful for the end user on top of that stuff that the nerds are building," the director pointed out.

And this is necessary for the process of adoption.

Developers often promise "smart contracts and immutability," creating "unstoppable" applications, but then they go and code their smart contracts to actually be stoppable, argued Laboon.

There is a lot of dependence on the teams building projects, which leaves those projects vulnerable and is a broader issue.

This vulnerability is not only in the form of devs abandoning a project; it could also be something much more malicious.

Therefore, projects need teams that will ensure they're "actually creating unstoppable applications and not something that somebody sprinkled a little blockchain dust on and said, okay, well, now it's a blockchain application."

As explained above, Polkadot is unstoppable. Even if Parity, the main engineering team behind Polkadot, were to just up and leave, it would not be the project's end. Polkadot would be able to find another team, Laboon claimed.

Centralized Points in the Decentralized Ecosystem

Polkadot established the Decentralized Futures grant program last year, allocating $20 million and 5 million DOT to kickstart economically independent, active participants in the Polkadot ecosystem.

"The basic outline," Laboon said, is "we give them cake, they build good products."

Last year, the team came to a major realization. Web3 Foundation and Parity were actually "centralized points in the ecosystem we wanted to decentralize."

Centralized entities within a system, no matter how far they distance themselves from decision-making, still maintain "soft power in the ecosystem."

Therefore, to decentralize those functions, they established various independent teams.

These include marketing, business development, education, documentation, investments, and more.

These are small teams with great products, which needed funding to start. When receiving the grant from the program, they agree to several milestones, getting a portion of the grant with each milestone reached.

"So far, it's been great. We've had some really good successes coming out of that," Laboon commented.

They had 250 applications and gave out exactly 20% (50 grants).

The Decentralized Futures program is now over, but "we may have another one in the future," he concluded.

Laboon is the author of two books: 'A Friendly Introduction to Software Testing,' an undergraduate textbook, and 'Strength in Numbers: A Novel of Cryptocurrency,' a near-future novel set in a world where cryptocurrency has eliminated traditional money.

Similar articles from other sources

Weekly Digest: Polkadot sees growth in funding, analytics and engagement By Investing.com

Investing.com - This week has been buzzing with activity for blockchain companies and platforms within the Polkadot ecosystem, with new funding rounds, fresh analytics tools, and a spike in developer engagement.

Here's a recap of the top stories that made headlines in this week's digest.

Hyperbridge raises $2.5 million in funding

Blockchain company Hyperbridge has bagged $2.5 million in seed funding from the Web3 Foundation and Scytale Digital to ramp up its use of Polkadot's architecture for decentralized applications and scalability.

The company also won a parachain slot on Polkadot, raising an additional $2.7 million through a crowdloan, which became the network's most successful campaign of its type.

Hyperbridge develops cross-chain interoperability through a hub model, incorporating zero-knowledge technology for secure cross-chain messaging and storage.

The Tie launches analytics dashboard for Polkadot

The Tie, a market data and digital assets analytics provider, has launched a Polkadot Ecosystem Dashboard for institutional clients.

The new dashboard provides a suite of analytics on Polkadot-based assets, allowing users to monitor network performance and explore projects within the ecosystem. The platform helps investors and traders make informed decisions by providing a holistic view of the Polkadot network's dynamics.

Dune expands data coverage to all Polkadot parachains

Blockchain analytics platform Dune has expanded its data coverage to include over 50 parachains in the Polkadot ecosystem.

The integration offers real-time insights into all onchain activities, enabling users to analyze transactions, decentralized finance (DeFi) activity, gaming developments, and NFTs. Dune's expanded coverage includes major parachains such as Moonbeam, Acala, Phala, and Mythos.

Kampela secures DAO funding on Polkadot

Crypto hardware startup Kampela has become the first company to secure full funding through a Decentralized Autonomous Organization (DAO) on the Polkadot network.

The funding supports Kampela's hardware wallet, which uses NFC technology and does not require wired charging or a battery.

EasyA tops one million downloads, boosts Polkadot engagement

Web3 education app EasyA has surpassed one million downloads on iOS and Android, with over 100,000 developers learning about Polkadot through the platform.

Founded in 2020 by Phil and Dom Kwok, EasyA has quickly become a key resource for both new and experienced developers, helping to drive increased activity on the Polkadot network.

September 12, 2024 8:28:05PM

Investing.com India

Weekly Digest: Polkadot sees growth in funding, analytics and engagement By Investing.com

September 12, 2024 8:31:50PM

Investing.com UK

Kampela Secures Polkadot Network Investment, Becomes First Fully DAO-Funded Hardware Wallet

Kampela, an innovative hardware startup, has successfully secured funding through a Decentralized Autonomous Organization (DAO) on the Polkadot network, marking a significant milestone as the first hardware project to be fully backed by a DAO in the Polkadot ecosystem. This achievement demonstrates the power of decentralized governance in supporting tangible technological innovations and opens new possibilities for hardware development in the blockchain space.

Kampela's device leverages NFC technology and operates without the need for wired charging or a battery, showcasing a novel approach to secure, user-centric hardware wallets. The company's innovative strategy consolidates both the design and production processes within a single integrated platform, enabling the delivery of highly differentiated solutions to the market.

Key Highlights:

- Funding secured through Polkadot Referenda #62, #370, and #886, totalling approximately 253,000 DOT (around $1 million at the current DOT price)

- Transparent, community-driven decision-making process

- Establishes a new model for supporting hardware innovation in the blockchain industry

- Battery-free operation powered exclusively by NFC during active use

- Secure NFC-only connectivity, eliminating vulnerable USB interfaces

Kirill, ex-CISO of Parity and Kampela's Co-Founder, stated, "Not only is Kampela one of the very few fully open hardware wallets on the markets (you can get everything, up to the precise machining instructions for the casing, from our GitHub), but the project is also a brilliant example of the power of DAOs in the new, blockchain-enabled economy: the project was funded via the Polkadot Treasury (one of the biggest DAOs in the world right now), and it is thanks to this funding that our designs are so uncompromisingly open, free, and democratic."

The successful funding of Kampela through a DAO on the Polkadot network showcases how community-driven funding can fuel advancements in secure, user-centric devices. This achievement is expected to attract attention from blockchain enthusiasts, hardware innovators, investors interested in decentralized funding models, and Polkadot community members and stakeholders.

Kampela's hardware design features several enhanced security measures, including battery-free operation and secure NFC-only connectivity. The device remains completely powered down when not in use, deterring physical access or tampering, and its NFC-only communication provides an additional security layer compared to traditional USB interfaces.

With six hardware revisions conducted and 54 components in its design, Kampela represents a significant leap forward in blockchain hardware innovation. As the first fully DAO-funded hardware project in the Polkadot ecosystem, Kampela sets a new precedent for how hardware startups can leverage decentralized funding and governance to bring cutting-edge products to market.

For more information about Kampela and its innovative hardware wallet, readers can visit https://www.kampe.la/ or https://x.com/kampela_signer.

About Kampela

Kampela, a Finnish startup, is revolutionizing secure transaction management with its innovative NFC-powered hardware device. Operating without Bluetooth, USB, or batteries, it features an e-ink display and open-source design. Tailored for blockchain applications and backed by the Web3 Foundation and Polkadot Treasury, Kampela offers a sustainable, secure solution for digital interactions in decentralized systems.

Disclaimer: This is a sponsored press release and is for informational purposes only. It does not reflect the views of Crypto Daily, nor is it intended to be used as legal, tax, investment, or financial advice.

September 12, 2024 4:31:48PM

cryptodaily.co.uk

Kampela Secures Polkadot Network Investment, Becomes First Fully DAO-Funded Hardware Wallet - Decrypt

September 12, 2024 5:19:54PM

Decrypt

The Tie Launches Polkadot Ecosystem Dashboard on The Tie Terminal

New Dashboard Provides Institutional Clients with In-Depth Insights into the Polkadot Ecosystem

The Tie, a leading provider of market data, news, and analytics for digital assets, has launched the Polkadot Ecosystem Dashboard, enabling institutional clients to access a comprehensive suite of analytics for Polkadot-based assets.

The Polkadot Ecosystem Dashboard offers a deep dive into essential metrics and insights, enabling users to understand key aspects of the network's performance and usage. By accessing Polkadot blockchain data via The Tie Terminal, investors and traders can make more informed decisions, enhance their understanding of assets and projects within the ecosystem, and gain valuable insights into one of the most dynamic ecosystems in the digital asset space.

"We're excited to launch the Polkadot Ecosystem Dashboard, providing our institutional clients with powerful tools to analyze and understand the rapidly evolving Polkadot ecosystem," said Joshua Frank, Co-Founder & CEO of The Tie. "This addition to The Tie Terminal reflects our commitment to delivering the most comprehensive insights, enabling informed decision-making in institutional crypto."

The dashboard includes account and native token metrics, allowing users to track the growth of the ecosystem by monitoring new, active, and cumulative account metrics over time for each parachain. This feature provides a clear view of user adoption and engagement, while detailed data on the total, staked, and circulating supply units of parachain tokens reflect community growth and interest. Additionally, information on the number of native token holders and price trends across the ecosystem offers further context for analyzing network performance.

Users can also access an overview of transaction metrics, including the volume of native tokens transferred within parachains over time. Transactions are categorized into signed and unsigned types, providing insights into network activity and user behavior. The dashboard tracks transaction fees in both native tokens and USD and displays transaction counts for parachains with Ethereum Virtual Machine (EVM) compatibility, such as Moonbeam and Astar. Additionally, the dashboard monitors Cross-Consensus Messaging (XCM) transactions, offering a comprehensive understanding of cross-chain interactions and interoperability within the Polkadot network.

The Total Value Locked (TVL) metrics provide a detailed analysis of the value locked across parachains, updated every 12 hours. This includes categories such as staking, governance tokens, and liquid staking. A treemap component offers a visual representation of the latest TVL metrics, presenting a high-level perspective of the ecosystem's value and activity.

The dashboard also features the latest XCM activity metrics, including data on XCM transfers, messages, open channels, and connected parachains, providing a thorough overview of Polkadot's cross-chain messaging dynamics. The screener function allows users to track ecosystem coins and events, enabling them to filter the dashboard according to specific coins, watchlists, or sectors. This feature ensures that institutional clients remain updated on critical events within the Polkadot ecosystem, helping them stay ahead of significant developments.

The Polkadot Ecosystem Dashboard is designed to give users a holistic view of the network's performance, making it easier to track developments, analyze trends, and explore the diverse projects within the Polkadot ecosystem. With this addition, The Tie Terminal continues to be an essential tool for institutional clients seeking comprehensive and actionable data in the digital asset space.

The Polkadot Ecosystem Dashboard is now available under the "Presets" section in the Dashboard Selection screen on The Tie Terminal. For more information, readers can visit The Tie's website.

About The Tie

The Tie is the leading provider of information services for digital assets, operating across three core verticals: Institutional, Data Redistribution, and Corporate Access. On the Institutional side, The Tie's core offering, The Tie Terminal, is the fastest and most comprehensive workstation for institutional digital asset investors. The Tie's institutional clients include hundreds of the leading traditional and crypto-native hedge funds, VCs, market makers, asset managers, banks, and other institutional market participants.

The Tie's Redistribution business syndicates data feeds to dozens of leading platforms, including FalconX, BitMEX, Real Vision, Broadridge, and Cointelegraph. The Tie's corporate access business provides direct connectivity between institutions and token issuers through a series of industry-leading conferences and events, including its flagship event, The Bridge, hosted in 2023 with the New York Stock Exchange.

About Polkadot

Polkadot is the powerful, secure core of Web3, providing a shared foundation that unites some of the world's most transformative apps and blockchains. Polkadot offers an advanced modular architecture that allows developers to easily design and build their own specialized blockchain projects, pooled security that ensures the same high standard of secure block production across all chains and apps connected to it, and robust governance that gives everyone a say in shaping the blockchain ecosystem for growth and sustainability. With Polkadot, users are not just participants; they are co-creators with the power to shape its future.

Jonathan Duran

jonathan@distractive.xyz

September 12, 2024 3:23:37PM

Cryptopolitan

The Tie Launches Polkadot Ecosystem Dashboard on The Tie Terminal

September 12, 2024 3:23:14PM

cryptodaily.co.uk

Kampela Secures Polkadot Network Investment, Becomes First Fully DAO-Funded Hardware Wallet - Blockonomi

September 12, 2024 4:25:44PM

Blockonomi

The Tie Launches Polkadot Ecosystem Dashboard on The Tie Terminal - Decrypt

September 12, 2024 3:30:00PM

Decrypt

Dune Becomes the Most Comprehensive Onchain Data Hub for Polkadot's 50+ Parachains

New Integration Expands Dune's Coverage Across the Entire Polkadot Ecosystem, Delivering Unmatched Onchain Analytics

Zug, Switzerland, September 11, 2024 - Dune, the leading platform for onchain analytics, announces the integration of 50+ parachains from the Polkadot ecosystem. With this expansion, Dune solidifies its position as the most comprehensive data hub for Polkadot, offering unparalleled insights and analytics for developers, investors, and data enthusiasts alike.

Earlier this year, Dune launched support for Polkadot, Kusama, and six parachains. Now, the platform takes a major step forward by expanding coverage to include the entire Polkadot ecosystem. This integration enables users to explore, analyze, and visualize all onchain activities across Polkadot in real time, making Dune the go-to destination for data-driven decision-making.

Polkadot's ecosystem, known for its diverse and innovative parachains, generates a vast amount of data. Navigating this complex network can be a challenge. Dune's expanded support simplifies access to crucial onchain data, empowering users to gain deeper insights into Polkadot's dynamic ecosystem.

The newly integrated parachains include Moonbeam, which specializes in smart contracts and cross-chain DeFi; Acala, known as Polkadot's hub for decentralized finance; Phala, which focuses on privacy-first DePIN and AI solutions; and Mythos, a protocol bringing AAA decentralized gaming and hugely popular franchises to Polkadot. These integrations, along with dozens of others, position Dune as the most expansive source for Polkadot's onchain analytics, allowing users to track transaction flows, analyze DeFi activity, and monitor developments in gaming and NFTs -- all within a single, comprehensive platform.

Fredrik Haga, CEO of Dune, shared: "Polkadot and its Substrate-based chains form a vast and complex ecosystem. With this integration of 50+ parachains, our goal is to make that complexity easier to navigate. We want to give people a clear, accessible view of what's happening across the network, so they can focus on innovation and building with confidence."

This milestone integration was achieved through Dune's partnership with Colorful Notion. Together, the teams developed a streamlined process for integrating new parachains, ensuring Dune's data remains comprehensive, accurate, and reliable.

Dune's integration also includes enhanced functionality through the Dune API, allowing users to convert any query into a flexible API endpoint. This feature offers greater flexibility for developers and analysts, enabling them to seamlessly incorporate Dune's data into their own applications.
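
For readers who want a concrete sense of what "converting a query into an API endpoint" looks like in practice, the sketch below pulls the latest results of a saved Dune query over HTTP. It is a minimal illustration: the query ID is hypothetical, the API key comes from an environment variable, and the response is assumed to follow Dune's documented result.rows layout.

```python
# Minimal sketch: fetching the latest results of a saved Dune query over HTTP.
# The query ID is a placeholder; the endpoint and header follow Dune's public
# API conventions at the time of writing.
import os
import requests

DUNE_API_KEY = os.environ["DUNE_API_KEY"]   # personal key from the Dune account settings
QUERY_ID = 1234567                          # hypothetical saved query, e.g. daily XCM transfers

resp = requests.get(
    f"https://api.dune.com/api/v1/query/{QUERY_ID}/results",
    headers={"X-Dune-API-Key": DUNE_API_KEY},
    timeout=30,
)
resp.raise_for_status()

rows = resp.json()["result"]["rows"]        # one dict per result row
for row in rows[:5]:
    print(row)
```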

To learn more about the latest Polkadot integrations, readers can visit Dune's Polkadot Analytics.

About Dune

Dune is a leading data analytics platform that democratizes access to onchain data by enabling users to query, visualize, and share insights across various blockchains. With over 700,000 community-contributed data tables, Dune supports comprehensive analysis of tokens, wallets, protocols, and more. The platform's recent launch of the Dune API extends its capabilities for automated reporting, alerting, and integration into user applications.

Polkadot is the powerful, secure core of Web3, providing a shared foundation that unites some of the world's most transformative apps and blockchains. Polkadot offers advanced modular architecture that allows developers to easily design and build their own specialized blockchain projects, pooled security that ensures the same high standard of secure block production across all connected chains and apps, and robust governance that ensures a transparent system where everyone has a say in shaping the blockchain ecosystem for growth and sustainability. With Polkadot, you're not just a participant; you're a co-creator with the power to shape its future.

Jonathan Duran

jonathan@distractive.xyz

Read more

View original article

September 11, 2024 3:07:45PM

Cryptopolitan

Dune Analytics builds a full data hub on Polkadot's 50 parachains

Polkadot is one of the most complex Web3 ecosystems, facing challenges of bridging and fragmented liquidity.

Dune, the leading platform for on-chain analytics, will integrate data from more than 50 parachains in the Polkadot ecosystem. Dune will expand its coverage of Polkadot, with data tailored toward investors, developers, and researchers.

Dune Analytics will host data from more than 50 parachains on Polkadot. The new data will give insight into Polkadot's entire ecosystem, where most projects have so far had little visibility. Dune will offer data targeting investors, developers, and data analysts. The data dashboards will be built with input from the Colorful Notion team. Dune has set up a process for integrating new parachains to scale the data as Polkadot grows.

Colorful Notion is a Polkadot ecosystem team that specializes in the network's data. Before the comprehensive partnership, Colorful Notion worked with individual parachains to build their Dune dashboards.

The Dune integration will grant users access to the Dune API, allowing them to convert queries into API endpoints. The dashboards will be usable by developers and analysts, who can incorporate the data directly into their own applications.

Previously, Dune Analytics held data for Polkadot, Kusama, and six more prominent parachains. The new dashboards will expose data from Polkadot in real-time. Polkadot's parachain system contains projects with varied use cases, each producing a specific type of data. Parachains are mostly used for DeFi purposes, creating the need to track liquidity.

Polkadot also hosts bridge parachains, app chains, smart contract hubs, and others. Newly added parachains will include Moonbeam, which specializes in smart contracts and cross-chain DeFi; Acala, known as Polkadot's hub for decentralized finance; Phala, which focuses on privacy-first DePIN and AI solutions; and Mythos, a protocol bringing AAA decentralized gaming and hugely popular franchises to Polkadot.

Dune will present dashboards with transaction flows, DeFi analysis, gaming developments, and NFT activity.

"Polkadot and its Substrate-based chains form a vast and complex ecosystem. With this integration of 50+ parachains, our goal is to make that complexity easier to navigate. We want to give people a clear, accessible view of what's happening across the network, so they can focus on innovation and building with confidence,"

said Fredrik Haga, CEO of Dune.

So far, Dune hosts more than 700K community-created dashboards with curated selections for the best data sources. Recently, the Dune API release brought the data to life for third-party apps. The platform allows for automated reporting, alerts, and detailed tracking of specific wallets or transactions.

Dune has also become a tool for showcasing lesser-known chains. So far, most of the Solana ecosystem and EVM-compatible chains have gained representation through community-generated dashboards and data panels. Polkadot will now gain more curation of its parachains, expanding the available community data boards.

Polkadot aims to recapture growth

Polkadot is a long-running project, launched through an ICO, that has become key to the Web3 ecosystem. The parachain model is one of its solutions for scaling. Parachains offer a way to launch apps cheaply while sharing the same security as bigger chains. Polkadot itself aims to scale Ethereum, though it is not directly EVM-compatible. The Polkadot hub still relies on bridging from Ethereum, and it has a relatively complex ecosystem. Three main bridges operate on Polkadot for specific asset and liquidity purposes.

Polkadot parachains also compete for influence in terms of value locked. The Dune dashboards will expose additional liquidity features and signs of activity. Currently, Moonbeam, Astar, and Hydration have the biggest liquidity share.

Polkadot is also known for its large-scale marketing, keeping a high profile on social media through influencers. However, the project may turn away from overspending on publicity, instead returning to its technical roots.

DOT, the native token of Polkadot, currently trades at $4.12 after moving above $10 during the peak of the 2024 bull market. Despite the low token price, Polkadot continues to use its reserves for building, while remaining a prominent Web3 platform.

Polkadot parachains are also a specific hub of activity, requiring additional developer skills. Developers are also weighing the capabilities of parachains against EVM smart contracts, trying to decrease the complexity of Polkadot tools. The new Dune dashboards may give more insight into Polkadot's strengths and weaknesses. Interoperable chains still face limitations, with only $45M locked in 21.co, a cross-chain liquidity app.

Read more

View original article

September 11, 2024 3:00:21PM

Cryptopolitan

Dune Becomes the Most Comprehensive Onchain Data Hub for Polkadot's 50+ Parachains

New Integration Expands Dune's Coverage Across the Entire Polkadot Ecosystem, Delivering Unmatched Onchain Analytics

Zug, Switzerland, September 11th, 2024 - Dune, the leading platform for onchain analytics, announces the integration of 50+ parachains from the Polkadot ecosystem. With this expansion, Dune solidifies its position as the most comprehensive data hub for Polkadot, offering unparalleled insights and analytics for developers, investors, and data enthusiasts alike.

Earlier this year, Dune launched support for Polkadot, Kusama, and six parachains. Now, the platform takes a major step forward by expanding coverage to include the entire Polkadot ecosystem. This integration enables users to explore, analyze, and visualize all onchain activities across Polkadot in real time, making Dune the go-to destination for data-driven decision-making.

Polkadot's ecosystem, known for its diverse and innovative parachains, generates a vast amount of data. Navigating this complex network can be a challenge. Dune's expanded support simplifies access to crucial onchain data, empowering users to gain deeper insights into Polkadot's dynamic ecosystem.

The newly integrated parachains include Moonbeam, which specializes in smart contracts and cross-chain DeFi; Acala, known as Polkadot's hub for decentralized finance; Phala, which focuses on privacy-first DePIN and AI solutions; and Mythos, a protocol bringing AAA decentralized gaming and hugely popular franchises to Polkadot. These integrations, along with dozens of others, position Dune as the most expansive source for Polkadot's onchain analytics, allowing users to track transaction flows, analyze DeFi activity, and monitor developments in gaming and NFTs -- all within a single, comprehensive platform.

Fredrik Haga, CEO of Dune, shared: "Polkadot and its Substrate-based chains form a vast and complex ecosystem. With this integration of 50+ parachains, our goal is to make that complexity easier to navigate. We want to give people a clear, accessible view of what's happening across the network, so they can focus on innovation and building with confidence."

This milestone integration was achieved through Dune's partnership with Colorful Notion. Together, the teams developed a streamlined process for integrating new parachains, ensuring Dune's data remains comprehensive, accurate, and reliable.

Dune's integration also includes enhanced functionality through the Dune API, allowing users to convert any query into a flexible API endpoint. This feature offers greater flexibility for developers and analysts, enabling them to seamlessly incorporate Dune's data into their own applications.

To learn more about the latest Polkadot integrations, readers can visit Dune's Polkadot Analytics.

About Dune

Dune is a leading data analytics platform that democratizes access to onchain data by enabling users to query, visualize, and share insights across various blockchains. With over 700,000 community-contributed data tables, Dune supports comprehensive analysis of tokens, wallets, protocols, and more. The platform's recent launch of the Dune API extends its capabilities for automated reporting, alerting, and integration into user applications.

Polkadot is the powerful, secure core of Web3, providing a shared foundation that unites some of the world's most transformative apps and blockchains. Polkadot offers advanced modular architecture that allows developers to easily design and build their own specialized blockchain projects, pooled security that ensures the same high standard of secure block production across all connected chains and apps, and robust governance that ensures a transparent system where everyone has a say in shaping the blockchain ecosystem for growth and sustainability. With Polkadot, you're not just a participant; you're a co-creator with the power to shape its future.

Disclaimer: This is a sponsored press release and is for informational purposes only. It does not reflect the views of Crypto Daily, nor is it intended to be used as legal, tax, investment, or financial advice.

Read more

View original article

September 11, 2024 3:07:25PM

cryptodaily.co.uk

Dune expanding onchain analytics to cover 50+ Polkadot parachains

Dune, a platform specializing in onchain analytics, has integrated over 50 parachains from the Polkadot ecosystem, making it a comprehensive data hub for Polkadot's blockchain network.

This Polkadot (DOT) expansion will allow developers, investors, and data analysts to access and analyze real-time data from the Polkadot ecosystem, according to a press release shared with crypto.news.

In other words, Dune has added data from over 50 different blockchains, known as parachains, in the Polkadot network, making it easier for people to track and analyze activity in real time.

This new integration broadens its coverage, allowing users to explore onchain activities such as transaction flows, DeFi activity, and developments in gaming and NFTs.

Earlier in 2024, Dune began supporting Polkadot, Kusama (KSM), and six parachains. Key parachains added in this integration include Moonbeam, Acala, Phala, and Mythos, each contributing to various sectors within Polkadot, ranging from DeFi to gaming.

This integration was developed through a partnership with Colorful Notion, ensuring reliable and accurate data. Additionally, Dune offers enhanced functionality via its API, allowing users to create flexible endpoints for deeper analysis.

Last year, Dune launched DuneAI, enabling users to query crypto data through natural language, removing the need for SQL. Additionally, Dune introduced the Dune Data Hub, allowing seamless integration and contribution of datasets for enhanced data management.

Read more

View original article

September 11, 2024 4:02:12PM

crypto.news

Dune expands onchain data coverage to all Polkadot parachains By Investing.com

Investing.com - Blockchain data analytics platform Dune has integrated over 50 parachains from the Polkadot ecosystem. The integration provides real-time insights and analytics for developers, investors, and data enthusiasts.

Following its initial support for Polkadot, Kusama, and six parachains earlier this year, Dune has expanded its coverage to include the entire Polkadot ecosystem. This addition allows users to analyze all onchain activities across Polkadot in real time, making it easier to access critical data and supporting data-driven decision-making.

Among the newly supported parachains are Moonbeam, a smart contract and cross-chain DeFi hub; Acala, known for decentralized finance; Phala, focusing on privacy-first DePIN and AI solutions; and Mythos, which supports decentralized gaming.

Dune users can now track transaction flows, DeFi activity, gaming developments, and NFTs on a single platform.

Dune CEO Fredrik Haga said: "Polkadot and its Substrate-based chains form a vast and complex ecosystem. With this integration of 50+ parachains, our goal is to make that complexity easier to navigate. We want to give people a clear, accessible view of what's happening across the network, so they can focus on innovation and building with confidence."

The integration was developed in collaboration with Colorful Notion and includes improved features via the Dune API, enabling users to convert queries into flexible API endpoints for smoother integration into their applications.

Although blockchain data is inherently on-chain and transparent, it can be challenging to interpret or analyze. The complexity and fragmentation of this data often pose obstacles for businesses and researchers looking to harness it for insights, innovation, or competitive advantage.

Dune is structured to allow community members to use its abstraction layer, Spellbook, to analyze raw blockchain data, which would otherwise be a complex and time-consuming task. This approach makes it easier for data scientists, analytics professionals, and businesses to access a wide range of blockchain data across different protocols and solutions, including Polkadot, Arbitrum, Base, Bitcoin, Ethereum, Optimism, Solana, and more.
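
Spellbook itself is a repository of community-maintained SQL models, so the following Python fragment is only a loose, toy illustration of what such an abstraction layer does: it reshapes raw, protocol-specific records into a single analysis-ready table. The column names and sample rows are invented for the example.

```python
# Toy illustration of an "abstraction layer": raw, protocol-specific records are
# reshaped into one tidy, analysis-ready table. Spellbook does this with
# community-maintained SQL models; the column names and rows here are invented.
import pandas as pd

raw_extrinsics = pd.DataFrame([
    {"chain": "moonbeam", "day": "2024-09-10", "section": "dex",      "method": "swap",     "fee": 125000},
    {"chain": "acala",    "day": "2024-09-10", "section": "dex",      "method": "swap",     "fee": 98000},
    {"chain": "moonbeam", "day": "2024-09-11", "section": "balances", "method": "transfer", "fee": 87000},
])

# "Model" step: keep only DEX activity and aggregate per chain and day, roughly
# what a curated trades-style table would expose to analysts.
dex_daily = (
    raw_extrinsics[raw_extrinsics["section"] == "dex"]
    .groupby(["chain", "day"], as_index=False)
    .agg(trades=("method", "count"), total_fee=("fee", "sum"))
)
print(dex_daily)
```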

Read more

View original article

September 11, 2024 12:34:09PM

Investing.com India

a month ago

Luka

Human ResourcesWorkflowEnterprise Information IntegrationTechnology

Customers Unify Critical Data and Processes with New Oracle Primavera Unifier Capabilities

Customers Unify Critical Data and Processes with New Oracle Primavera Unifier Capabilities - CRN - India

August 7, 2024 8:39:03AM

CRN - India

A core part of the Oracle Smart Construction Platform, Primavera Unifier continues to help customers digitise and automate capital planning, project execution, and facilities management to improve predictability and reliability. To speed execution and efficiency, Oracle has released the Primavera Unifier Accelerator base configuration. With 65 new preconfigured business processes, this new configuration helps customers save time and can eliminate costly process design and custom reporting. When used as a starting point, the Unifier Accelerator has helped early-adopter users expedite implementation time by up to 75%.

With the new Unifier Accelerator configuration, which includes 250 out-of-the-box reports and dashboards, Primavera Unifier helps organisations better manage cash flow, forecasts, contracts, scope changes, and project administration, including daily reports, RFIs, and more.

"Northwell Health strives to lead and innovate, not only in the care we provide, but also in how we build and maintain our healthcare facilities," said Ian Jablonski, Director of Strategy for Systems, Data, and Services for Facilities Services, Northwell Health. "Oracle Primavera Unifier has been a key component, supporting the unification of our processes and data on our journey toward maximising efficiency across more than 900 Northwell facilities through optimised processes, system standardisation, data integration, and predictive analytics."

Extensions of Unifier Accelerator process-specific capabilities to help improve compliance and outcomes include NEC4 contract management with preconfigured contract templates and reports. Additionally, Earned Value Management, Progress Measurement, Advanced Work Packaging, and Facilities Management are other capabilities supported by Primavera Unifier and the Unifier Accelerator. The connected architecture can help unify cost, contract, change, schedule milestone, workflow, and document management processes within a single solution. Many of these processes are now supported by the newly updated Primavera Unifier mobile app, including location-based services that can instantly tag on-site inspections to groups and manage issues based on geography more easily.

Connected AI

With integrations to Oracle Primavera P6 and Oracle Primavera Cloud scheduling solutions, organisations using the Unifier Accelerator base configuration can unite all their processes and data in applications across their Oracle Engineering and Construction portfolio, including Oracle Construction Intelligence Cloud. This enables them to leverage built-in predictive analytics and AI insights to better detect and reduce project risk. For example, customers can leverage the Safety Risk Forecast, a custom predictive model that enables users to reduce safety risk by proactively identifying high-risk projects through an incident-warning system. This information enables organisations to take actions that can prevent incidents and losses before they occur.
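
Oracle has not published the internals of the Safety Risk Forecast model, so the fragment below is only a toy sketch of the general pattern an incident-warning system follows: score each project with a predictive model, then flag those above a risk threshold. All names, features, and weights are invented.

```python
# Toy sketch only: NOT Oracle's Safety Risk Forecast. It shows the general
# incident-warning pattern: score each project, then flag high-risk ones.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    open_safety_observations: int
    overtime_hours_last_month: float
    past_incidents: int

def risk_score(p: Project) -> float:
    # Hand-tuned weights standing in for a trained model's output.
    return (0.4 * p.open_safety_observations
            + 0.02 * p.overtime_hours_last_month
            + 1.5 * p.past_incidents)

ALERT_THRESHOLD = 5.0

projects = [Project("Hospital wing A", 3, 120, 1), Project("Substation retrofit", 1, 20, 0)]

for p in projects:
    score = risk_score(p)
    if score >= ALERT_THRESHOLD:
        print(f"WARNING: {p.name} flagged as high safety risk (score={score:.1f})")
```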

"The Primavera Unifier Accelerator is a natural new addition from Oracle Construction and Engineering," Vernon Harley, project & portfolio management solutions lead, Accenture. "Given the many cross-industry relevant business processes that are required to drive core project controls and given their learnings from many past implementations of Primavera Unifier, it makes perfect sense for Oracle to offer this configuration package. It will greatly reduce implementation time for clients who need core project controls business processes to be available in production for the end users more quickly."

Integrating systems to unify data

Leveraging Oracle Integration Cloud (OIC) middleware, organisations can also connect Primavera Unifier to its other Oracle and third-party business systems, including enterprise resource planning (ERP) and enterprise asset management (EAM) systems, more consistently, reducing integration time and risk. Users can additionally streamline integrations with mature RESTful Primavera Unifier APIs that support Oracle and third-party middleware solutions with Unifier integration templates now available from the Oracle middleware marketplace.
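
As an illustration of the kind of RESTful integration described above, the sketch below posts a business-process record to Primavera Unifier with plain HTTP. The host, authentication scheme, service path, and field names are placeholders; the exact services and payload shapes depend on the Unifier deployment and its configured business processes, so treat this as a shape, not a reference.

```python
# Illustrative sketch of pushing a record into Primavera Unifier over its REST
# web services. Host, token, service path, and field names are placeholders;
# consult the deployment's REST documentation for the real service contracts.
import os
import requests

UNIFIER_BASE = "https://unifier.example.com"     # hypothetical host
TOKEN = os.environ["UNIFIER_TOKEN"]              # assumed bearer-token auth

payload = {
    "options": {"project_number": "P-1001", "bp_name": "Change Orders"},   # illustrative
    "data": [{"title": "Scope change - east wing", "amount": 25000}],      # illustrative
}

resp = requests.post(
    f"{UNIFIER_BASE}/ws/rest/service/v1/bp/record",   # illustrative path
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```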

"It's more important than ever for our customers to quickly recognise the benefits from digitising processes and unifying data," says Mark Webster, senior vice president and general manager, Oracle Construction and Engineering. "With our increased strategic investment in Primavera Unifier and other components of the Oracle Smart Construction Platform, we are transforming the delivery of capital and construction projects by addressing key customer requirements, enhancing interoperability, and bringing the expanded capabilities to market faster."

Read more

View original article

August 7, 2024 8:39:03AM

CRN - India

Similar articles from other sources


Customers unify critical data and processes with new Oracle Primavera Unifier capabilities - CRN - India

A core part of the Oracle Smart Construction Platform, Primavera Unifier continues to help customers digitize and automate capital planning, project execution, and facilities management to improve predictability and reliability. To speed execution and efficiency, Oracle has released the Primavera Unifier Accelerator base configuration. With 65 new preconfigured business processes, this new configuration helps customers save time and can eliminate costly process design and custom reporting. When used as a starting point, the Unifier Accelerator has helped early-adopter users expedite implementation time by up to 75%.

With the new Unifier Accelerator configuration, which includes 250 out-of-the-box reports and dashboards, Primavera Unifier helps organizations better manage cash flow, forecasts, contracts, scope changes, and project administration, including daily reports, RFIs, and more.

"Northwell Health strives to lead and innovate, not only in the care we provide, but also how we build and maintain our healthcare facilities," said Ian Jablonski, Director of Strategy for Systems, Data, and Services for Facilities Services, Northwell Health. "Oracle Primavera Unifier has been a key component, supporting the unification of our processes and data on our journey toward maximizing efficiency across more than 900 Northwell facilities through optimized processes, system standardization, data integration, and predictive analytics."

Extensions of Unifier Accelerator process-specific capabilities to help improve compliance and outcomes include NEC4 contract management with preconfigured contract templates and reports. Additionally, Earned Value Management, Progress Measurement, Advanced Work Packaging, and Facilities Management are other capabilities supported by Primavera Unifier and the Unifier Accelerator. The connected architecture can help unify cost, contract, change, schedule milestone, workflow, and document management processes within a single solution. Many of these processes are now supported by the newly updated Primavera Unifier mobile app, including location-based services that can instantly tag on-site inspections to groups and manage issues based on geography more easily.

Connected AI

With integrations to Oracle Primavera P6 and Oracle Primavera Cloud scheduling solutions, organizations using the Unifier Accelerator base configuration can unite all their processes and data in applications across their Oracle Engineering and Construction portfolio, including Oracle Construction Intelligence Cloud. This enables them to leverage built-in predictive analytics and AI insights to better detect and reduce project risk. For example, customers can leverage the Safety Risk Forecast, a custom predictive model that enables users to reduce safety risk by proactively identifying high-risk projects through an incident-warning system. This information enables organizations to take actions that can prevent incidents and losses before they occur.

"The Primavera Unifier Accelerator is a natural new addition from Oracle Construction and Engineering," Vernon Harley, project & portfolio management solutions lead, Accenture. "Given the many cross-industry relevant business processes that are required to drive core project controls and given their learnings from many past implementations of Primavera Unifier, it makes perfect sense for Oracle to offer this configuration package. It will greatly reduce implementation time for clients who need core project controls business processes to be available in production for the end users more quickly."

Integrating systems to unify data

Leveraging Oracle Integration Cloud (OIC) middleware, organizations can also connect Primavera Unifier to its other Oracle and third-party business systems, including enterprise resource planning (ERP) and enterprise asset management (EAM) systems, more consistently, reducing integration time and risk. Users can additionally streamline integrations with mature RESTful Primavera Unifier APIs that support Oracle and third-party middleware solutions with Unifier integration templates now available from the Oracle middleware marketplace.

"It's more important than ever for our customers to quickly recognize the benefits from digitizing processes and unifying data," says Mark Webster, senior vice president and general manager, Oracle Construction and Engineering. "With our increased strategic investment in Primavera Unifier and other components of the Oracle Smart Construction Platform, we are transforming the delivery of capital and construction projects by addressing key customer requirements, enhancing interoperability, and bringing the expanded capabilities to market faster."

Read more

View original article

August 7, 2024 8:25:22AM

CRN - India

Customers unify critical data and processes with New Oracle Primavera unifier capabilities - Express Computer

A core part of the Oracle Smart Construction Platform, Primavera Unifier continues to help customers digitise and automate capital planning, project execution, and facilities management to improve predictability and reliability. To speed execution and efficiency, Oracle has released the Primavera Unifier Accelerator base configuration. With 65 new preconfigured business processes, this new configuration helps customers save time and can eliminate costly process design and custom reporting. When used as a starting point, the Unifier Accelerator has helped early-adopter users expedite implementation time by up to 75%.

With the new Unifier Accelerator configuration, which includes 250 out-of-the-box reports and dashboards, Primavera Unifier helps organisations better manage cash flow, forecasts, contracts, scope changes, and project administration, including daily reports, RFIs, and more.

"Northwell Health strives to lead and innovate, not only in the care we provide, but also how we build and maintain our healthcare facilities," said Ian Jablonski, Director of Strategy for Systems, Data, and Services for Facilities Services, Northwell Health. "Oracle Primavera Unifier has been a key component, supporting the unification of our processes and data on our journey toward maximising efficiency across more than 900 Northwell facilities through optimised processes, system standardisation, data integration, and predictive analytics."

Extensions of Unifier Accelerator process-specific capabilities to help improve compliance and outcomes include NEC4 contract management with preconfigured contract templates and reports. Additionally, Earned Value Management, Progress Measurement, Advanced Work Packaging, and Facilities Management are other capabilities supported by Primavera Unifier and the Unifier Accelerator. The connected architecture can help unify cost, contract, change, schedule milestone, workflow, and document management processes within a single solution. Many of these processes are now supported by the newly updated Primavera Unifier mobile app, including location-based services that can instantly tag on-site inspections to groups and manage issues based on geography more easily.

Connected AI

With integrations to Oracle Primavera P6 and Oracle Primavera Cloud scheduling solutions, organisations using the Unifier Accelerator base configuration can unite all their processes and data in applications across their Oracle Engineering and Construction portfolio, including Oracle Construction Intelligence Cloud. This enables them to leverage built-in predictive analytics and AI insights to better detect and reduce project risk. For example, customers can leverage the Safety Risk Forecast, a custom predictive model that enables users to reduce safety risk by proactively identifying high-risk projects through an incident-warning system. This information enables organizations to take actions that can prevent incidents and losses before they occur.

"The Primavera Unifier Accelerator is a natural new addition from Oracle Construction and Engineering," Vernon Harley, project & portfolio management solutions lead, Accenture. "Given the many cross-industry relevant business processes that are required to drive core project controls and given their learnings from many past implementations of Primavera Unifier, it makes perfect sense for Oracle to offer this configuration package. It will greatly reduce implementation time for clients who need core project controls business processes to be available in production for the end users more quickly."

Integrating systems to unify data

Leveraging Oracle Integration Cloud (OIC) middleware, organisations can also connect Primavera Unifier to its other Oracle and third-party business systems, including enterprise resource planning (ERP) and enterprise asset management (EAM) systems, more consistently, reducing integration time and risk. Users can additionally streamline integrations with mature RESTful Primavera Unifier APIs that support Oracle and third-party middleware solutions with Unifier integration templates now available from the Oracle middleware marketplace.

"It's more important than ever for our customers to quickly recognise the benefits from digitising processes and unifying data," says Mark Webster, senior vice president and general manager, Oracle Construction and Engineering. "With our increased strategic investment in Primavera Unifier and other components of the Oracle Smart Construction Platform, we are transforming the delivery of capital and construction projects by addressing key customer requirements, enhancing interoperability, and bringing the expanded capabilities to market faster."

Read more

View original article

August 7, 2024 6:56:55AM

Express Computer

Syniverse to Offer Personalized Messaging Services to Oracle Cloud Customers

Syniverse has announced that its Syniverse Communication Gateway has achieved Integrated with Oracle Cloud Expertise and is now available in the Oracle Cloud Marketplace, offering added value to Oracle Cloud customers.

The Syniverse Communication Gateway on Oracle Cloud Marketplace enables businesses and organizations of all sizes to interact more securely with prospective employees through SMS messaging via Oracle Fusion Cloud Recruiting, part of Oracle Fusion Cloud Human Capital Management (HCM). With a global reach, these businesses are empowered to build effective, reliable connections with candidates directly through Oracle Cloud Recruiting, allowing for the exchange of automated, personalized messages regardless of location.

Integrated with Oracle Cloud, Syniverse Communication Gateway offers Oracle Cloud customers instant, personalized communication; improved security; global availability; and turnkey support.

Integrated with Oracle Cloud Expertise recognizes OPN members with solutions that integrate with Oracle Cloud. For partners earning the Powered by Oracle Cloud Expertise, this achievement offers customers confidence that the partner's application is supported by the Oracle Cloud Infrastructure SLA, enabling full access and control over their cloud infrastructure services as well as consistent performance.

Read more

View original article

August 7, 2024 4:13:00AM

thefastmode.com

Customers Unify Critical Data and Processes with New Oracle Primavera Unifier Capabilities

Preconfigured business processes, reports, and dashboards speed implementation and time to value by up to 75%

AUSTIN, Texas, Aug. 6, 2024 /PRNewswire/ -- A core part of the Oracle Smart Construction Platform, Primavera Unifier continues to help customers digitize and automate capital planning, project execution, and facilities management to improve predictability and reliability. To speed execution and efficiency, Oracle has released the Primavera Unifier Accelerator base configuration. With 65 new preconfigured business processes, this new configuration helps customers save time and can eliminate costly process design and custom reporting. When used as a starting point, the Unifier Accelerator has helped early-adopter users expedite implementation time by up to 75%.

With the new Unifier Accelerator configuration, which includes 250 out-of-the-box reports and dashboards, Primavera Unifier helps organizations better manage cash flow, forecasts, contracts, scope changes, and project administration, including daily reports, RFIs, and more.

"Northwell Health strives to lead and innovate, not only in the care we provide, but also how we build and maintain our healthcare facilities," said Ian Jablonski, Director of Strategy for Systems, Data, and Services for Facilities Services, Northwell Health. "Oracle Primavera Unifier has been a key component, supporting the unification of our processes and data on our journey toward maximizing efficiency across more than 900 Northwell facilities through optimized processes, system standardization, data integration, and predictive analytics."

Extensions of Unifier Accelerator process-specific capabilities to help improve compliance and outcomes include NEC4 contract management with preconfigured contract templates and reports. Additionally, Earned Value Management, Progress Measurement, Advanced Work Packaging, and Facilities Management are other capabilities supported by Primavera Unifier and the Unifier Accelerator. The connected architecture can help unify cost, contract, change, schedule milestone, workflow, and document management processes within a single solution. Many of these processes are now supported by the newly updated Primavera Unifier mobile app, including location-based services that can instantly tag on-site inspections to groups and manage issues based on geography more easily.

Connected AI

With integrations to Oracle Primavera P6 and Oracle Primavera Cloud scheduling solutions, organizations using the Unifier Accelerator base configuration can unite all their processes and data in applications across their Oracle Engineering and Construction portfolio, including Oracle Construction Intelligence Cloud. This enables them to leverage built-in predictive analytics and AI insights to better detect and reduce project risk. For example, customers can leverage the Safety Risk Forecast, a custom predictive model that enables users to reduce safety risk by proactively identifying high-risk projects through an incident-warning system. This information enables organizations to take actions that can prevent incidents and losses before they occur.

"The Primavera Unifier Accelerator is a natural new addition from Oracle Construction and Engineering," Vernon Harley, project & portfolio management solutions lead, Accenture. "Given the many cross-industry relevant business processes that are required to drive core project controls and given their learnings from many past implementations of Primavera Unifier, it makes perfect sense for Oracle to offer this configuration package. It will greatly reduce implementation time for clients who need core project controls business processes to be available in production for the end users more quickly."

Integrating systems to unify data

Leveraging Oracle Integration Cloud (OIC) middleware, organizations can also connect Primavera Unifier to its other Oracle and third-party business systems, including enterprise resource planning (ERP) and enterprise asset management (EAM) systems, more consistently, reducing integration time and risk. Users can additionally streamline integrations with mature RESTful Primavera Unifier APIs that support Oracle and third-party middleware solutions with Unifier integration templates now available from the Oracle middleware marketplace.

"It's more important than ever for our customers to quickly recognize the benefits from digitizing processes and unifying data," says Mark Webster, senior vice president and general manager, Oracle Construction and Engineering. "With our increased strategic investment in Primavera Unifier and other components of the Oracle Smart Construction Platform, we are transforming the delivery of capital and construction projects by addressing key customer requirements, enhancing interoperability, and bringing the expanded capabilities to market faster."

1. Oracle Consulting, CS4i, collected this data in June 2024.

About Oracle

Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at http://oracle.com.

Trademarks

Oracle, Java, MySQL and NetSuite are registered trademarks of Oracle Corporation. NetSuite was the first cloud company--ushering in the new era of cloud computing.

View original content to download multimedia: https://www.prnewswire.com/news-releases/customers-unify-critical-data-and-processes-with-new-oracle-primavera-unifier-capabilities-302214778.html

Read more

View original article

August 6, 2024 2:24:08PM

IT News Online

Customers Unify Critical Data and Processes with New Oracle Primavera Unifier Capabilities By Investing.com

Preconfigured business processes, reports, and dashboards speed implementation and time to value by up to 75%

AUSTIN, Texas, Aug. 6, 2024 /PRNewswire/ -- A core part of the Oracle Smart Construction Platform, Primavera Unifier continues to help customers digitize and automate capital planning, project execution, and facilities management to improve predictability and reliability. To speed execution and efficiency, Oracle has released the Primavera Unifier Accelerator base configuration. With 65 new preconfigured business processes, this new configuration helps customers save time and can eliminate costly process design and custom reporting. When used as a starting point, the Unifier Accelerator has helped early-adopter users expedite implementation time by up to 75%.

With the new Unifier Accelerator configuration, which includes 250 out-of-the-box reports and dashboards, Primavera Unifier helps organizations better manage cash flow, forecasts, contracts, scope changes, and project administration, including daily reports, RFIs, and more.

"Northwell Health strives to lead and innovate, not only in the care we provide, but also how we build and maintain our healthcare facilities," said , Director of Strategy for Systems, Data, and Services for Facilities Services, Northwell Health. "Oracle Primavera Unifier has been a key component, supporting the unification of our processes and data on our journey toward maximizing efficiency across more than 900 Northwell facilities through optimized processes, system standardization, data integration, and predictive analytics."

Extensions of Unifier Accelerator process-specific capabilities to help improve compliance and outcomes include NEC4 contract management with preconfigured contract templates and reports. Additionally, Earned Value Management, Progress Measurement, Advanced Work Packaging, and Facilities Management are other capabilities supported by Primavera Unifier and the Unifier Accelerator. The connected architecture can help unify cost, contract, change, schedule milestone, workflow, and document management processes within a single solution. Many of these processes are now supported by the newly updated Primavera Unifier mobile app, including location-based services that can instantly tag on-site inspections to groups and manage issues based on geography more easily.

Connected AI

With integrations to Oracle Primavera P6 and Oracle Primavera Cloud scheduling solutions, organizations using the Unifier Accelerator base configuration can unite all their processes and data in applications across their Oracle Engineering and Construction portfolio, including Oracle Construction Intelligence Cloud. This enables them to leverage built-in predictive analytics and AI insights to better detect and reduce project risk. For example, customers can leverage the Safety Risk Forecast, a custom predictive model that enables users to reduce safety risk by proactively identifying high-risk projects through an incident-warning system. This information enables organizations to take actions that can prevent incidents and losses before they occur.

"The Primavera Unifier Accelerator is a natural new addition from Oracle Construction and Engineering," , project & portfolio management solutions lead, Accenture (NYSE:ACN). "Given the many cross-industry relevant business processes that are required to drive core project controls and given their learnings from many past implementations of Primavera Unifier, it makes perfect sense for Oracle to offer this configuration package. It will greatly reduce implementation time for clients who need core project controls business processes to be available in production for the end users more quickly."

Integrating systems to unify data

Leveraging Oracle Integration Cloud (OIC) middleware, organizations can also connect Primavera Unifier to its other Oracle and third-party business systems, including enterprise resource planning (ERP) and enterprise asset management (EAM) systems, more consistently, reducing integration time and risk. Users can additionally streamline integrations with mature RESTful Primavera Unifier APIs that support Oracle and third-party middleware solutions with Unifier integration templates now available from the Oracle middleware marketplace.

"It's more important than ever for our customers to quickly recognize the benefits from digitizing processes and unifying data," says , senior vice president and general manager, Oracle Construction and Engineering. "With our increased strategic investment in Primavera Unifier and other components of the Oracle Smart Construction Platform, we are transforming the delivery of capital and construction projects by addressing key customer requirements, enhancing interoperability, and bringing the expanded capabilities to market faster."

About Oracle Construction and Engineering

Asset owners and project leaders rely on Oracle Construction and Engineering solutions for the visibility and control, connected supply chain, and data security needed to drive performance and mitigate risk across their processes, projects, and organization. Our scalable cloud construction management software solutions enable digital transformation for teams that plan, build, and operate critical assets, improving efficiency, collaboration, and change control across the project lifecycle. http://www.oracle.com/construction-and-engineering.

1. Oracle Consulting, CS4i, collected this data in June 2024.

About Oracle

Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at http://oracle.com.

Read more

View original article

August 6, 2024 1:59:50PM

Investing.com

Customers Unify Critical Data and Processes with New Oracle Primavera Unifier Capabilities

Preconfigured business processes, reports, and dashboards speed implementation and time to value by up to 75%

AUSTIN, Texas, Aug. 6, 2024 /PRNewswire/ -- A core part of the Oracle Smart Construction Platform, Primavera Unifier continues to help customers digitize and automate capital planning, project execution, and facilities management to improve predictability and reliability. To speed execution and efficiency, Oracle has released the Primavera Unifier Accelerator base configuration. With 65 new preconfigured business processes, this new configuration helps customers save time and can eliminate costly process design and custom reporting. When used as a starting point, the Unifier Accelerator has helped early-adopter users expedite implementation time by up to 75%.

With the new Unifier Accelerator configuration, which includes 250 out-of-the-box reports and dashboards, Primavera Unifier helps organizations better manage cash flow, forecasts, contracts, scope changes, and project administration, including daily reports, RFIs, and more.

"Northwell Health strives to lead and innovate, not only in the care we provide, but also how we build and maintain our healthcare facilities," said Ian Jablonski, Director of Strategy for Systems, Data, and Services for Facilities Services, Northwell Health. "Oracle Primavera Unifier has been a key component, supporting the unification of our processes and data on our journey toward maximizing efficiency across more than 900 Northwell facilities through optimized processes, system standardization, data integration, and predictive analytics."

Extensions of Unifier Accelerator process-specific capabilities to help improve compliance and outcomes include NEC4 contract management with preconfigured contract templates and reports. Additionally, Earned Value Management, Progress Measurement, Advanced Work Packaging, and Facilities Management are other capabilities supported by Primavera Unifier and the Unifier Accelerator. The connected architecture can help unify cost, contract, change, schedule milestone, workflow, and document management processes within a single solution. Many of these processes are now supported by the newly updated Primavera Unifier mobile app, including location-based services that can instantly tag on-site inspections to groups and manage issues based on geography more easily.

Connected AI

With integrations to Oracle Primavera P6 and Oracle Primavera Cloud scheduling solutions, organizations using the Unifier Accelerator base configuration can unite all their processes and data in applications across their Oracle Engineering and Construction portfolio, including Oracle Construction Intelligence Cloud. This enables them to leverage built-in predictive analytics and AI insights to better detect and reduce project risk. For example, customers can leverage the Safety Risk Forecast, a custom predictive model that enables users to reduce safety risk by proactively identifying high-risk projects through an incident-warning system. This information enables organizations to take actions that can prevent incidents and losses before they occur.

"The Primavera Unifier Accelerator is a natural new addition from Oracle Construction and Engineering," Vernon Harley, project & portfolio management solutions lead, Accenture. "Given the many cross-industry relevant business processes that are required to drive core project controls and given their learnings from many past implementations of Primavera Unifier, it makes perfect sense for Oracle to offer this configuration package. It will greatly reduce implementation time for clients who need core project controls business processes to be available in production for the end users more quickly."

Integrating systems to unify data

Leveraging Oracle Integration Cloud (OIC) middleware, organizations can also connect Primavera Unifier to its other Oracle and third-party business systems, including enterprise resource planning (ERP) and enterprise asset management (EAM) systems, more consistently, reducing integration time and risk. Users can additionally streamline integrations with mature RESTful Primavera Unifier APIs that support Oracle and third-party middleware solutions with Unifier integration templates now available from the Oracle middleware marketplace.

"It's more important than ever for our customers to quickly recognize the benefits from digitizing processes and unifying data," says Mark Webster, senior vice president and general manager, Oracle Construction and Engineering. "With our increased strategic investment in Primavera Unifier and other components of the Oracle Smart Construction Platform, we are transforming the delivery of capital and construction projects by addressing key customer requirements, enhancing interoperability, and bringing the expanded capabilities to market faster."

About Oracle Construction and Engineering

Asset owners and project leaders rely on Oracle Construction and Engineering solutions for the visibility and control, connected supply chain, and data security needed to drive performance and mitigate risk across their processes, projects, and organization. Our scalable cloud construction management software solutions enable digital transformation for teams that plan, build, and operate critical assets, improving efficiency, collaboration, and change control across the project lifecycle. http://www.oracle.com/construction-and-engineering.

1. Oracle Consulting, CS4i, collected this data in June 2024.

About Oracle

Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at http://oracle.com.

Trademarks

Oracle, Java, MySQL and NetSuite are registered trademarks of Oracle Corporation. NetSuite was the first cloud company--ushering in the new era of cloud computing.

Read more

View original article

August 6, 2024 1:59:25PM

Yahoo! Finance

Odyssey House implements Oracle Health to help enhance patient care, improve revenue management - Utah Business

Austin, TX -- Odyssey House, a community-based mental health and substance abuse treatment provider in Utah, has gone live with Oracle Health's electronic health record (EHR) and behavioral health solutions at its Martindale Clinic. By replacing its previous health records system with Oracle Health, Odyssey House will be able to further automate billing processes and give clinicians an all-encompassing view of patient data to help make more informed care decisions. With connected data and systems, Odyssey House can also reduce administrative overhead, improve financial transparency and control, and simplify its operational reporting.

Odyssey House is one of the largest and most comprehensive addiction programs in Utah. The organization offers adult outpatient and residential treatment services, as well as programming for parents with children, youth residential, sober housing, criminal justice, and alumni services across 29 care sites throughout the state.

"Oracle Health is the only EHR vendor that has been able to meet the needs of our agency with behavioral health charting, primary care charting, and an integrated revenue cycle solution," said Adam Cohen, chief executive officer, Odyssey House. "With Oracle Health, we can implement interoperable solutions that communicate with one another to help care for the whole patient."

As its program grew over the years, Odyssey House implemented multiple disparate solutions that ultimately required redundant data entry and complex manual processes to ensure that patient information was accessible and accurate for care teams, as well as for front and back-office staff. Following a thorough vendor review, Odyssey House selected Oracle Health's EHR to replace its legacy systems and drive clinical and operational efficiency.

Odyssey House recently went live with Oracle Health's EHR at its primary care Martindale Clinic. With the Oracle Health EHR and more than 200 behavioral health-specific screening and assessment tools, staff will be able to take advantage of the system's simplified workflows to create a more comprehensive patient record. These solutions, together with other features, including embedded telehealth capabilities and on-demand reporting and analytics, can help clinicians make more-informed care decisions.

The clinic will also benefit from Oracle Health EHR's revenue cycle management solutions. These tools will enable Odyssey House to aggregate clinical and financial data, automate chart completion, and simplify the coding workflow to help reduce manual processes, improve billing accuracy, and increase reimbursement rates.

"For many people, mental health is a years or lifelong battle, making it essential for caregivers to have a complete record of their patient's health and treatment journey," said Seema Verma, executive vice president and general manager, Oracle Health and Life Sciences. "With Oracle's connected technologies, providers like Odyssey House can combine mental health and primary care data to help providers make the most informed care decisions, while maximizing opportunities as an organization by better managing revenue and optimizing efficiency."

Learn more about how Oracle is advancing healthcare at https://www.oracle.com/health/.

About Oracle

Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at http://www.oracle.com.

Trademarks

Oracle, Java, MySQL and NetSuite are registered trademarks of Oracle Corporation. NetSuite was the first cloud company -- ushering in the new era of cloud computing.

Read more

View original article

August 5, 2024 5:01:51PM

Utah Business

2 months ago

Luka

Master Data ManagementData CentersEnterprise Information IntegrationTechnology

The Data That Powers A.I. Is Disappearing Fast

Apple says it took a 'responsible' approach to training its Apple Intelligence models

July 30, 2024 2:31:39AM

TechCrunch

Apple has published a technical paper detailing the models that it developed to power Apple Intelligence, the range of generative AI features headed to iOS, macOS and iPadOS over the next few months.

In the paper, Apple pushes back against accusations that it took an ethically questionable approach to training some of its models, reiterating that it didn't use private user data and drew on a combination of publicly available and licensed data for Apple Intelligence.

"[The] pre-training data set consists of ... data we have licensed from publishers, curated publicly available or open-sourced datasets and publicly available information crawled by our web crawler, Applebot," Apple writes in the paper. "Given our focus on protecting user privacy, we note that no private Apple user data is included in the data mixture."

In July, Proof News reported that Apple used a data set called The Pile, which contains subtitles from hundreds of thousands of YouTube videos, to train a family of models designed for on-device processing. Many YouTube creators whose subtitles were swept up in The Pile weren't aware of and didn't consent to this; Apple later released a statement saying that it didn't intend to use those models to power any AI features in its products.

The technical paper, which peels back the curtain on the models Apple first revealed at WWDC 2024 in June, called Apple Foundation Models (AFM), emphasizes that the training data for the AFM models was sourced in a "responsible" way -- or responsible by Apple's definition, at least.

The AFM models' training data includes publicly available web data as well as licensed data from undisclosed publishers. According to The New York Times, Apple reached out to several publishers toward the end of 2023, including NBC, Condé Nast and IAC, about multi-year deals worth at least $50 million to train models on publishers' news archives. Apple's AFM models were also trained on open source code hosted on GitHub, specifically Swift, Python, C, Objective-C, C++, JavaScript, Java and Go code.

Training models on code without permission, even open code, is a point of contention among developers. Some open source codebases aren't licensed or don't allow for AI training in their terms of use, some developers argue. But Apple says that it "license-filtered" for code to try to include only repositories with minimal usage restrictions, like those under an MIT, ISC or Apache license.
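
Apple has not published its filtering pipeline, but the basic idea of "license filtering" is straightforward: keep only repositories whose declared license sits on a permissive allowlist. The sketch below illustrates that step with invented repository metadata.

```python
# Minimal sketch of license-filtering a code corpus: keep only repositories
# whose declared license identifier is on a permissive allowlist. The repo
# metadata is invented; Apple's actual pipeline is not public.
PERMISSIVE_LICENSES = {"MIT", "ISC", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause"}

repos = [
    {"name": "example/fast-json",  "license": "MIT"},
    {"name": "example/gpl-parser", "license": "GPL-3.0"},
    {"name": "example/tiny-http",  "license": "Apache-2.0"},
    {"name": "example/no-license", "license": None},
]

def license_filter(candidates):
    """Yield only repositories with a known, permissive license identifier."""
    for repo in candidates:
        if repo["license"] in PERMISSIVE_LICENSES:
            yield repo

for repo in license_filter(repos):
    print("include:", repo["name"], repo["license"])
```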

To boost the AFM models' mathematics skills, Apple specifically included in the training set math questions and answers from webpages, math forums, blogs, tutorials and seminars, according to the paper. The company also tapped "high-quality, publicly-available" data sets (which the paper doesn't name) with "licenses that permit use for training ... models," filtered to remove sensitive information.

All told, the training data set for the AFM models weighs in at about 6.3 trillion tokens. (Tokens are bite-sized pieces of data that are generally easier for generative AI models to ingest.) For comparison, that's less than half the number of tokens -- 15 trillion -- Meta used to train its flagship text-generating model, Llama 3.1 405B.
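
Because "token" recurs in these counts, a simple illustration may help; real LLM tokenizers use learned subword vocabularies (such as BPE), so the naive word-and-punctuation split below is only a rough stand-in for how text is broken into pieces.

import re

def naive_tokenize(text):
    # Split into word and punctuation pieces as a rough stand-in for subword tokens.
    return re.findall(r"\w+|[^\w\s]", text)

sample = "Tokens are bite-sized pieces of data that models ingest."
tokens = naive_tokenize(sample)
print(tokens)
print(len(tokens), "tokens")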

Apple sourced additional data, including data from human feedback and synthetic data, to fine-tune the AFM models and attempt to mitigate any undesirable behaviors, like spouting toxicity.

"Our models have been created with the purpose of helping users do everyday activities across their Apple products, grounded

in Apple's core values, and rooted in our responsible AI principles at every stage," the company says.

There's no smoking gun or shocking insight in the paper -- and that's by careful design. Rarely are papers like these very revealing, owing to competitive pressures but also because disclosing too much could land companies in legal trouble.

Some companies training models by scraping public web data assert that their practice is protected by fair use doctrine. But it's a matter that's very much up for debate and the subject of a growing number of lawsuits.

Apple notes in the paper that it allows webmasters to block its crawler from scraping their data. But that leaves individual creators in the lurch. What's an artist to do if, for example, their portfolio is hosted on a site that refuses to block Apple's data scraping?

Courtroom battles will decide the fate of generative AI models and the way they're trained. For now, though, Apple's trying to position itself as an ethical player while avoiding unwanted legal scrutiny.

Read more

View original article

July 30, 2024 2:31:39AM

TechCrunch

Similar articles from other sources

Teaching computers to forget

Policymakers have been grappling with the rising complexity of Machine Learning (ML) models that churn huge swathes of data through Large Language Models (LLMs) and deep neural networks. The complexity has made it difficult for data fiduciaries to effectively "correct, complete, update and erase" sensitive data from computer systems. Simultaneously, we are witnessing an increase in AI (Artificial Intelligence) bias, misinformation, and breach of privacy, which gets heightened during events such as elections.

OPINION | Life in the times of AI

The antithesis of ML

To deal with this problem, a possible solution that has ignited interest among researchers and companies alike is the idea of Machine Unlearning (MUL). First mooted by Cao and Yang in 'Towards Making Systems Forget with Machine Unlearning', MUL asks how we can make machines forget data they were trained on. It is the antithesis of ML: an algorithm is added to the AI model to identify and delete false, incorrect, discriminatory, outdated, and sensitive information.

The concept builds on the challenge of removing information once these LLMs have churned through it. The same data can be reused for multiple objectives, making it difficult to track; this chain of reuse, known as data lineage, grows into a complex web that degrades data quality, invites manipulation and adversarial outputs, and makes sensitive information hard to locate and remove. Moreover, as there is no sandbox approach for choosing and processing data in these models, there is also the demonstrated possibility of hackers inserting manipulated data to produce biased results (data poisoning).

One might argue for simply deleting the entire data set, i.e. data pruning, and re-training the entire AI model. However, this inflates computational costs and causes undue delays for data fiduciaries, while carrying the risk of a substantial loss of accuracy. Consequently, MUL is gaining traction as a viable option among data fiduciaries such as IBM, where models are being tested for improved unlearning accuracy, intelligibility, reduced unlearning time, and cost efficiency.
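
As a rough illustration of the costly baseline the article contrasts MUL against, the sketch below retrains a model from scratch after dropping the records to be forgotten; the dataset, the model choice (scikit-learn logistic regression), and the set of records to erase are all invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

original = LogisticRegression().fit(X, y)         # model trained on everything

forget_idx = np.arange(50)                        # records a user asked to erase
keep = np.setdiff1d(np.arange(len(X)), forget_idx)

retrained = LogisticRegression().fit(X[keep], y[keep])  # retrain from scratch without them
# The retrained model provably carries no influence from the forgotten rows, but the
# full retrain is exactly the cost (compute, delay) that MUL research tries to avoid.
print(original.score(X[keep], y[keep]), retrained.score(X[keep], y[keep]))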

Three approaches

The question, however, remains how a MUL model can be implemented to effectively fulfil the obligation. There could be three approaches based on their viability for on-ground implementation: private, public, and international. In the private approach, data fiduciaries are primarily responsible for testing MUL algorithms, which can then be applied across their training models for efficient deletion based on specific requirements. This voluntary approach gives companies considerable headroom to enhance their AI models and preserve users' rights without undue government intervention. However, smaller companies may lack the expertise and resources to execute these models, which might discourage them from testing the solutions. This is the model currently being followed, albeit at a preliminary stage.

In the public approach, the government has the responsibility to prepare the statutory blueprint, either through soft-law or hard-law approaches, to obligate data fiduciaries to fulfil their legal obligations. This approach has to be read in the context of rising mentions of AI in legislative proceedings (from 1,247 in 2022 to 2,175 in 2023) across 49 jurisdictions. The data reflect a high likelihood of government intervention in the near future if a major breakthrough in a MUL model parallels the rising regulatory tide. The government can issue guidelines under the respective Data or AI Protection Regime mandating that data fiduciaries implement a plausible MUL model. For instance, the European Union's AI Act has adopted a soft-law approach by adding a provision to tackle data poisoning. It treats data poisoning as a form of cyber attack and directs data fiduciaries to put in place security controls "to ensure a level of cybersecurity appropriate to the risks."

On the contrary, the government can itself prepare a MUL model as part of its Digital Public Infrastructure for the perusal of data fiduciaries to implement across platforms uniformly. This is especially useful in developing countries where the state has substantive stakes in the DPI for the country's overall development. Moreover, it addresses the problem of affordability and expertise for smaller companies.

The international approach emphasises the role of nation states in coming together and preparing a framework to be adopted uniformly at a domestic level. The rationale flows from the idea that any innovation in AI has trans-boundary implications, and it is preferable to follow uniform standards across jurisdictions as a step ahead towards global governance of AI. As the efficacy of this approach is not clear amid geopolitical frictions, the onus effectively shifts to the role of international standard-setting organisations such as the International Electrotechnical Commission to come up with MUL standards that can be applied across jurisdictions.

These approaches represent a formal blueprint for one of the solutions that can be used to rein in the risks of Generative AI and preserve the user's right to be forgotten. MUL is still in its preliminary stages, so stakeholders must address technical and regulatory considerations to ensure its effective implementation in this evolving landscape of AI.

Read more

View original article

July 29, 2024 9:30:18PM

The Hindu

Can AI Truly Self-Improve? New Findings Show Human Data Crucial for AI Advancement

AI models trained on synthetic data may experience rapid degradation in output quality

Artificial Intelligence (AI) systems, such as ChatGPT, have made remarkable progress, yet they still need human assistance for enhancement and accuracy. According to a new study published in Nature, AI systems remain heavily reliant on large datasets curated and labelled by humans to learn and generate their responses.

Despite significant advancements in large language models (LLMs), one major limitation is their inability to improve by training on their own output. Training an AI involves feeding it vast amounts of data to help it understand context, language patterns, and various nuances before fine-tuning based on performance.

In Nature's groundbreaking study, researchers experimented by providing LLMs with AI-generated text. This approach led the models to forget the less frequently mentioned information in their datasets, causing their outputs to become more homogeneous and eventually nonsensical. "The message is, we have to be very careful about what ends up in our training data," says co-author Zakhar Shumaylov, an AI researcher at the University of Cambridge, UK. Otherwise, "things will always, provably, go wrong," he added.

The team used mathematical analysis to show that the problem of model collapse is likely to be universal, affecting all sizes of language models that use uncurated data, simple image generators, and other types of AI. "This is a concern when trying to make AI models that represent all groups fairly because low-probability events often relate to marginalised groups," says study co-author Ilia Shumailov, who worked on the project at the University of Oxford, UK.

Language models build associations between tokens -- words or word parts -- in large volumes of text, often scraped from the Internet. They generate text by predicting the statistically most probable next word based on these learned patterns. In the study, the researchers started by using an LLM to create Wikipedia-like entries, then trained new iterations of the model on text produced by its predecessor.
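
To make the mechanics concrete, here is a toy, heavily simplified sketch of next-word prediction and of the recursive "train on the previous model's output" setup the researchers used; the miniature corpus and bigram model stand in for real Wikipedia text and a full LLM, and are purely illustrative.

import random
from collections import defaultdict, Counter

def train_bigram(text):
    # Count which word follows which -- the crudest possible "association between tokens".
    counts = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length=20):
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])  # sample the likely next word
    return " ".join(out)

corpus = "the tower of the old church stands over the town and the river"
model = train_bigram(corpus)

# Generation 0 trains on the real corpus; later generations train on their predecessor's output.
for gen in range(3):
    synthetic = generate(model, "the")
    print(f"generation {gen}: {synthetic}")
    model = train_bigram(synthetic)  # rare continuations tend to drop out a little each round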

As AI-generated information -- synthetic data -- polluted the training set, the model's outputs became gibberish. For instance, the ninth iteration of the model completed a Wikipedia-style article about English church towers with a treatise on the many colours of jackrabbit tails. The team expected to see errors but were surprised by how quickly "things went wrong."

"Collapse happens because each model is necessarily sampled only from the data on which it is trained. This means that infrequent words in the original data are less likely to be reproduced, and the probability of common ones being regurgitated is boosted," explains Shumaylov. "Complete collapse eventually occurs because each model learns not from reality but from the previous model's prediction of reality, with errors amplified in each iteration."

The study also highlights a more significant issue. As synthetic data builds up on the web, the scaling laws suggesting that models should improve with more data may break down because training data will lose the richness and variety of human-generated content. Hany Farid, a computer scientist at the University of California, Berkeley, compares this problem to "inbreeding within a species." Farid states, "If a species inbreeds with its own offspring and doesn't diversify its gene pool, it can lead to a collapse of the species."

Farid's work has demonstrated similar effects in image models, producing eerie distortions of reality. The solution? Developers might need to find ways, such as watermarking, to keep AI-generated data separate from accurate data, which would require unprecedented coordination by big-tech firms, suggests Shumailov.

Even when Shumailov and his team fine-tuned each model on 10 per cent real data alongside synthetic data, the collapse occurred more slowly. Society might need to find incentives for human creators to continue producing content, and filtering may become essential. For example, humans could curate AI-generated text before it goes back into the data pool.

Read more

View original article

July 29, 2024 7:41:35PM

International Business Times UK

This AI Paper from Stanford Provides New Insights on AI Model Collapse and Data Accumulation

Large-scale generative models like GPT-4, DALL-E, and Stable Diffusion have transformed artificial intelligence, demonstrating remarkable capabilities in generating text, images, and other media. However, as these models become more prevalent, a critical challenge emerges: the consequences of training generative models on datasets containing their own outputs. This issue, known as model collapse, poses a significant threat to the future development of AI. As generative models are trained on web-scale datasets that increasingly include AI-generated content, researchers are grappling with the potential degradation of model performance over successive iterations, which could render newer models ineffective and compromise the quality of training data for future AI systems.

Researchers have investigated model collapse through various methods, including replacing real data with generated data, augmenting fixed datasets, and mixing real and synthetic data. Most studies maintained constant dataset sizes and mixing proportions. Theoretical work has focused on understanding model behavior with synthetic data integration, analyzing high-dimensional regression, self-distillation effects, and language model output tails. Some researchers identified phase transitions in error scaling laws and proposed mitigation strategies. However, these studies primarily considered fixed amounts of training data per iteration. Few have explored the effects of accumulating data over time, which more closely resembles evolving internet-based datasets. This research gap highlights the need for further investigation into the long-term consequences of training models on continuously expanding datasets that include both real and synthetic data, reflecting the dynamic nature of web-scale information.

Researchers from Stanford University propose a study that explores the impact of accumulating data on model collapse in generative AI models. Unlike previous research focusing on data replacement, this approach simulates the continuous accumulation of synthetic data in internet-based datasets. Experiments with transformers, diffusion models, and variational autoencoders across various data types reveal that accumulating synthetic data with real data prevents model collapse, in contrast to the performance degradation observed when replacing data. The researchers extend existing analysis of sequential linear models to prove that data accumulation results in a finite, well-controlled upper bound on test error, independent of model-fitting iterations. This finding contrasts with the linear error increase seen in data replacement scenarios.
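
A hedged, toy version of the replace-versus-accumulate comparison can be written in a few lines; the one-dimensional regression setup, noise level, and number of generations below are invented and only meant to show the shape of the two strategies, not the paper's actual experiments.

import numpy as np

rng = np.random.default_rng(0)
n, true_w = 200, 2.0
X = rng.normal(size=n)
y_real = true_w * X + rng.normal(scale=0.5, size=n)

def fit_slope(x, y):
    # One-dimensional least-squares slope estimate.
    return float(x @ y / (x @ x))

for mode in ("replace", "accumulate"):
    x_train, y_train = X.copy(), y_real.copy()
    for gen in range(5):
        w_hat = fit_slope(x_train, y_train)
        y_synth = w_hat * X + rng.normal(scale=0.5, size=n)  # labels generated by the current model
        if mode == "replace":
            x_train, y_train = X.copy(), y_synth              # train only on synthetic data
        else:
            x_train = np.concatenate([x_train, X])            # keep real + all synthetic so far
            y_train = np.concatenate([y_train, y_synth])
    print(f"{mode}: slope error after 5 generations = {abs(fit_slope(x_train, y_train) - true_w):.4f}")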

Researchers experimentally investigated model collapse in generative AI using causal transformers, diffusion models, and variational autoencoders across text, molecular, and image datasets.

To test model collapse in transformer-based language models, researchers used GPT-2 and Llama 2 architectures of various sizes, pre-trained on TinyStories. They compared data replacement and accumulation strategies over multiple iterations. Results consistently showed that replacing data increased test cross-entropy (worse performance) across all model configurations and sampling temperatures. In contrast, accumulating data maintained or improved performance over iterations. Lower sampling temperatures accelerated error increases when replacing data, but the overall trend remained consistent. These findings strongly support the hypothesis that data accumulation prevents model collapse in language modeling tasks, while data replacement leads to progressive performance degradation.

Researchers tested GeoDiff diffusion models on GEOM-Drugs molecular conformation data, comparing data replacement and accumulation strategies. Results showed increasing test loss when replacing data, but stable performance when accumulating data. Unlike language models, significant degradation occurred mainly in the first iteration with synthetic data. These findings further support data accumulation as a method to prevent model collapse across different AI domains.

Researchers used VAEs on CelebA face images, comparing data replacement and accumulation strategies. Replacing data led to rapid model collapse, with increasing test error and decreasing image quality and diversity. Accumulating data significantly slowed the collapse, preserving major variations but losing minor details over iterations. Unlike in the language-model experiments, accumulation still showed slight performance degradation. These findings support data accumulation's benefits in mitigating model collapse across AI domains while highlighting that its effectiveness varies with model type and dataset.

This research investigates model collapse in AI, a concern as AI-generated content increasingly appears in training datasets. While previous studies showed that training on model outputs can degrade performance, this work demonstrates that model collapse can be prevented by training on a mixture of real and synthetic data. The findings, supported by experiments across various AI domains and theoretical analysis for linear regression, suggest that the "curse of recursion" may be less severe than previously thought, as long as synthetic data is accumulated alongside real data rather than replacing it entirely.

Read more

View original article

July 29, 2024 5:20:19PM

MarkTechPost

Beware of AI 'model collapse': How training on synthetic data pollutes the next generation

To feed the endless appetite of generative artificial intelligence (gen AI) for data, researchers have in recent years increasingly tried to create "synthetic" data, which is similar to the human-created works that have been used to train AI models but is itself created by AI.

The synthetic data movement is a vibrant one because of copyright infringement issues with human-based training data, and also because the requirements of training better and better models may eventually exceed the availability of human-generated data.

Also: 3 ways Meta's Llama 3.1 is an advance for Gen AI

For example, in Meta's flagship open-source model, Llama 3.1 405B, which the company introduced last week, the researchers made extensive use of synthetic data to "fine-tune" the model and to supplement the human feedback they gathered.

There's a catch, though. Oxford University scholars warn in the most recent issue of the prestigious science journal Nature that using such synthetic data to train gen AI can drastically degrade the accuracy of the models, to the point of making them useless.

In the paper, lead author Ilia Shumailov and his team describe what they call "model collapse," and how it becomes worse each time models feed the next model with fake data.

Also: Google's DeepMind AI takes home silver medal in complex math competition

"Model collapse is a degenerative process affecting generations of learned generative models, in which the data they generate end up polluting the training set of the next generation," Shumailov's team wrote. "Being trained on polluted data, they then mis-perceive reality."

Specifically, the models lose track of the less-common facts over generations, becoming more and more generic. As they do so, the answers they produce become increasingly irrelevant to the questions they are asked, turning into effectively gibberish. "Models start forgetting improbable events over time, as the model becomes poisoned with its own projection of reality," they write.

The authors wrote that the findings "must be taken seriously," as gen AI risks a compounding process of deterioration the more that the internet is flooded with the output of AI models that then gets re-used. "The use of LLMs at scale to publish content on the internet will pollute the collection of data to train their successors: data about human interactions with LLMs will be increasingly valuable," they wrote.

Also: OpenAI offers GPT-4o mini to slash the cost of applications

To arrive at that conclusion, the authors conducted an experiment using Meta's open-source AI model, OPT, for "open pre-trained transformer," introduced in 2022. It is similar in structure to OpenAI's GPT-3, but much smaller, with only 125 million neural parameters, or "weights."

Shumailov's team used the Wikitext2 dataset of Wikipedia articles to "fine-tune" OPT, meaning, to re-train it with additional data, a very common practice in gen AI. The authors then used the fine-tuned OPT to in turn generate synthetic copies of the Wikitext data, and they fed that new, fake data to the next fine-tuning operation, a kind of cannibalistic use of the output of one model as the input of another.
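
The outer loop of such an experiment can be sketched with the Hugging Face transformers library; the snippet below is only the shape of the procedure, not the authors' code, and the fine_tune() and load_wikitext2() helpers named in the comments are hypothetical placeholders for a standard Trainer run and a dataset download.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def generate_synthetic(model, prompts, max_new_tokens=128):
    # Use the current generation's model to produce the next generation's training text.
    texts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=True)
        texts.append(tok.decode(out[0], skip_special_tokens=True))
    return texts

# Outer loop of the experiment, in pseudocode (fine_tune() and load_wikitext2() are placeholders):
#
# corpus = load_wikitext2()
# for generation in range(5):
#     model = fine_tune(model, corpus)              # re-train on the current corpus
#     corpus = generate_synthetic(model, prompts)   # then replace it with model output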

The authors provided examples of what happens after five rounds of using each fine-tuned model as the source for teaching the next: by generation five, it's complete gibberish. At the same time, they wrote, specific errors of fact became more common with each generation: "We find that, over the generations, models [...] start introducing their own improbable sequences, that is, errors."

Reflecting on what can be done to avoid model collapse, the authors ended their paper on an ominous note. It's essential to preserve the original, human-created training data, and to also have continued access to new human-created data, but doing so becomes harder as synthetic data from gen AI fills up more and more of the internet, creating a kind of lost internet of the past.

They warned, "It may become increasingly difficult to train newer versions of LLMs without access to data that were crawled from the internet before the mass adoption of the technology or direct access to data generated by humans at scale."

The editors of the magazine summed up the problem perhaps most succinctly with the old data science adage they placed on the cover: "garbage in, garbage out."

Read more

View original article

July 29, 2024 4:12:34PM

ZDNet

Has the AI boom plateaued?

First, we are quickly running out of the data required to train newer models. The voracious appetite of existing models has already vacuumed up most of the publicly available text on the open internet, and it is estimated that the entire stock of high-quality textual data online will be used up by 2028, creating a so-called "data wall" that might be difficult to breach. While there are still abundant sources of video, audio and images that can be tapped, these are far more difficult to use for training than text and are subject to greater intellectual property protections.

Second, the quality of the public data that remains available is a serious concern. With much of the high-quality data already accounted for, only subpar sources are left, which necessitates spending more time "cleaning" the available data to make it fit for consumption. Good quality data is an absolute necessity for training quality LLMs. Finding new, untapped sources of such data will be difficult, forcing developers and companies to look at improving the quality of existing data sets and drawing more utility from them.

A sourcing issue

Third, any further expansion of data for LLMs would need to come from two sources: proprietary data protected by intellectual property, or "synthetic data" generated by AI systems themselves. Access to proprietary data can legally only be acquired on a case-by-case basis, subject to specific agreements with the data owners, which slows the pace of access. While there has already been some movement on this front, with institutions like The New York Times signing data access agreements with OpenAI for their content, how far this model can be replicated globally is up for debate. Synthetic data, on the other hand, could provide a more readily available alternative to publicly available online sources by training AI models on data produced by other AI systems, thereby bypassing the "data wall" problem.

The growing interest in synthetic data and its theoretical possibilities has led firms like Nvidia and IBM to launch their own synthetic data generators. There is, however, a great deal of scepticism about the efficacy of such data compared to human-generated sources. A recent study in Nature suggests that models trained on synthetic data are more likely to "hallucinate" or produce nonsensical outputs, because synthetic data amplifies mistakes made in previous generations of training, leading to AI models "collapsing" on themselves. AI offerings trained on synthetic data are therefore likely to have less commercial uptake than models trained on human-generated data.

Fourth, an honest assessment would show that despite the hype over the last year and a half, AI adoption has been relatively slow and narrow, creating doubts about the short-term commercial viability of prevalent models. The most recent quarterly financial report from Alphabet for example shows that the cost of training and deploying their respective models has far outweighed any immediate commercial returns, leading to a hammering of its share price after 18 months of giddy ascent. Odds are, other AI-focused Big Tech firms like Meta and Microsoft are likely to show similar results. In essence, the majority of the world still does not know what to do with existing AI models, forget about trying any new ones.

Still a niche

Beyond a small niche, AI as a product has not really taken off as expected. Big Tech's focus now is on creating more use-cases for existing AI, either by integrating it at the backend within existing products the way Apple is, or cannibalising existing rivals' businesses like OpenAI is attempting to do with SearchGPT.

The task, for now

The primary question the AI and tech community faces at this point is not technological but commercial -- does good technology translate into good business? After all, developing a technology and commercialising it are two very different problems. Will AI change the world? Undoubtedly, but this will take time and will be subject to the very real problems of capital, regulations, and market forces. There is also significant work that remains to be done on the physical aspects of the AI ecosystem, such as designing more efficient and environmentally positive data centres, creating more robust chip supply lines, and building new power plants, amongst a host of other issues.

Therefore, until significant AI-attributable revenues start coming in, AI and tech companies will be forced to focus on the incremental tinkering needed to make existing models better, sustainable, and importantly, more commercially viable. This phase of consolidation is not only good but is also required to set the necessary groundwork for the next generation of frontier models to be more easily accepted globally.

Read more

View original article

July 29, 2024 12:38:09PM

The Indian Express

The data that powers AI is disappearing fast

SAN FRANCISCO -- For years, the people building powerful artificial intelligence systems have used enormous troves of text, images and videos pulled from the internet to train their models.

Now that data is drying up.

Over the past year, many of the most important web sources used for training AI models have restricted the use of their data, according to a study published last week by the Data Provenance Initiative, a Massachusetts Institute of Technology-led research group.

The study, which looked at 14,000 web domains that are included in three commonly used AI training data sets, discovered an "emerging crisis in consent," as publishers and online platforms have taken steps to prevent their data from being harvested.

The researchers estimate that in the three data sets -- called C4, RefinedWeb and Dolma -- 5% of all data, and 25% of data from the highest-quality sources, has been restricted. Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt.
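
For readers unfamiliar with the protocol, here is a small, self-contained illustration of how a robots.txt file expresses such restrictions and how a compliant crawler checks them; the file contents and crawler names (GPTBot, CCBot) are examples commonly blocked in this context, not the policy of any particular publisher.

import urllib.robotparser

example_robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(example_robots_txt.splitlines())

# The publisher opts AI crawlers out while leaving the rest of the web untouched.
print(rp.can_fetch("GPTBot", "https://example.com/articles/1"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/articles/1"))  # True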

The study also found that as much as 45% of the data in one set, C4, had been restricted by websites' terms of service.

"We're seeing a rapid decline in consent to use data across the web that will have ramifications not just for AI companies, but for researchers, academics and noncommercial entities," Shayne Longpre, the study's lead author, said in an interview.

Data is the main ingredient in today's generative AI systems, which are fed billions of examples of text, images and videos. Much of that data is scraped from public websites by researchers and compiled in large data sets, which can be downloaded and freely used, or supplemented with data from other sources.

Learning from that data is what allows generative AI tools like OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude to write, code and generate images and videos. The more high-quality data is fed into these models, the better their outputs generally are.

For years, AI developers were able to gather data fairly easily. But the generative AI boom of the past few years has led to tensions with the owners of that data -- many of whom have misgivings about being used as AI training fodder or at least want to be paid for it.

As the backlash has grown, some publishers have set up paywalls or changed their terms of service to limit the use of their data for AI training. Others have blocked the automated web crawlers used by companies like OpenAI, Anthropic and Google.

Sites like Reddit and StackOverflow have begun charging AI companies for access to data, and a few publishers have taken legal action -- including The New York Times, which sued OpenAI and Microsoft for copyright infringement last year, alleging that the companies used news articles to train their models without permission.

Companies like OpenAI, Google and Meta have gone to extreme lengths in recent years to gather more data to improve their systems, including transcribing YouTube videos and bending their own data policies.

More recently, some AI companies have struck deals with publishers including The Associated Press and News Corp., the owner of The Wall Street Journal, giving them ongoing access to their content.

But widespread data restrictions may pose a threat to AI companies, which need a steady supply of high-quality data to keep their models fresh and up to date.

They could also spell trouble for smaller AI outfits and academic researchers who rely on public data sets and can't afford to license data directly from publishers. Common Crawl, one such data set that comprises billions of pages of web content and is maintained by a nonprofit, has been cited in more than 10,000 academic studies, Longpre said.

It's not clear which popular AI products have been trained on these sources, since few developers disclose the full list of data they use. But data sets derived from Common Crawl, including C4 (which stands for Colossal Clean Crawled Corpus), have been used by companies including Google and OpenAI to train previous versions of their models. Spokespeople for Google and OpenAI declined to comment.

Yacine Jernite, a machine-learning researcher at Hugging Face, a company that provides tools and data to AI developers, characterized the consent crisis as a natural response to the AI industry's aggressive data-gathering practices.

"Unsurprisingly, we're seeing blowback from data creators after the text, images and videos they've shared online are used to develop commercial systems that sometimes directly threaten their livelihoods," he said.

But he cautioned that if all AI training data needed to be obtained through licensing deals, it would exclude "researchers and civil society from participating in the governance of the technology."

Stella Biderman, the executive director of EleutherAI, a nonprofit AI research organization, echoed those fears.

"Major tech companies already have all of the data," she said. "Changing the license on the data doesn't retroactively revoke that permission, and the primary impact is on later-arriving actors, who are typically either smaller startups or researchers."

AI companies have claimed that their use of public web data is legally protected under fair use. But gathering new data has gotten trickier. Some AI executives I've spoken to worry about hitting the "data wall" -- their term for the point at which all of the training data on the public internet has been exhausted, and the rest has been hidden behind paywalls, blocked by robots.txt or locked up in exclusive deals.

Some companies believe they can scale the data wall by using synthetic data -- that is, data that is itself generated by AI systems -- to train their models. But many researchers doubt that today's AI systems are capable of generating enough high-quality synthetic data to replace the human-created data they're losing.

Another challenge is that while publishers can try to stop AI companies from scraping their data by placing restrictions in their robots.txt files, those requests aren't legally binding, and compliance is voluntary. (Think of it like a "no trespassing" sign for data, but one without the force of law.)

Major search engines honor these opt-out requests, and several leading AI companies, including OpenAI and Anthropic, have said publicly that they do, too. But other companies, including the AI-powered search engine Perplexity, have been accused of ignoring them. Perplexity's CEO, Aravind Srinivas, said that the company respects publishers' data restrictions. He added that while the company once worked with third-party web crawlers that did not always follow the Robots Exclusion Protocol, it had "made adjustments with our providers to ensure that they follow robots.txt when crawling on Perplexity's behalf."

Longpre said that one of the big takeaways from the study is that we need new tools to give website owners more precise ways to control the use of their data. Some sites might object to AI giants using their data to train chatbots for a profit but might be willing to let a nonprofit or educational institution use the same data, he said. Right now, there's no good way for them to distinguish between those uses, or block one while allowing the other.

But there's also a lesson here for big AI companies, who have treated the internet as an all-you-can-eat data buffet for years, without giving the owners of that data much of value in return. Eventually, if you take advantage of the web, the web will start shutting its doors.

Read more

View original article

July 29, 2024 12:00:02PM

The Seattle Times

The Data That Powers A.I. Is Disappearing Fast

For years, the people building powerful artificial intelligence systems have used enormous troves of text, images and videos pulled from the internet to train their models.

Now, that data is drying up.

Over the past year, many of the most important web sources used for training A.I. models have restricted the use of their data, according to a study published this week by the Data Provenance Initiative, an M.I.T.-led research group.

The study, which looked at 14,000 web domains that are included in three commonly used A.I. training data sets, discovered an "emerging crisis in consent," as publishers and online platforms have taken steps to prevent their data from being harvested.

The researchers estimate that in the three data sets -- called C4, RefinedWeb and Dolma -- 5 percent of all data, and 25 percent of data from the highest-quality sources, has been restricted. Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt.

The study also found that as much as 45 percent of the data in one set, C4, had been restricted by websites' terms of service.

"We're seeing a rapid decline in consent to use data across the web that will have ramifications not just for A.I. companies, but for researchers, academics and noncommercial entities," said Shayne Longpre, the study's lead author, in an interview.

Read more

View original article

July 26, 2024 8:44:03PM

The New York Times

2 months ago

Luka

TechnologyIntranetStandards and ProtocolsEnterprise Information Integration

AVA AI: Aboitiz Data Innovation's new GenAI platform for enterprises - Back End News

Aboitiz Data Innovation Launches AVA AI: Leading the Charge in Enterprise-Grade GenAI Innovation

July 19, 2024 9:05:52AM

mindanaoexaminernewspaper.blogspot.com

MANILA - Aboitiz Data Innovation (ADI), the Data Science and Artificial Intelligence (DSAI) arm of the Aboitiz Group, recently announced the launch of AVA AI, its Generative AI (GenAI) Platform designed to empower enterprises across the public and private sectors. Built with enhancing productivity in mind, AVA AI is able to seamlessly integrate into existing systems and handle complex workflows, allowing teams to gain access to constructive insights from their organisations' collective data securely and efficiently.

Read more

View original article

July 19, 2024 9:05:52AM

mindanaoexaminernewspaper.blogspot.com

Similar articles from other sources

AVA AI: Aboitiz Data Innovation's new GenAI platform for enterprises - Back End News

Aboitiz Data Innovation (ADI), the data science and artificial intelligence (DSAI) arm of the Aboitiz Group, has announced the launch of AVA AI, a Generative AI (GenAI) platform that promises to enhance productivity for enterprises in both the public and private sectors.

AVA AI is designed to integrate seamlessly with existing systems, handle complex workflows, and provide secure and efficient access to insightful data.

Enterprises often struggle with data spread across multiple sources, leading to missed insights and hindered decision-making. GenAI platforms that rely solely on publicly available data can generate irrelevant or inaccurate information, posing security risks. AVA AI addresses these challenges by consolidating data and offering actionable insights, potentially improving organizational efficiency and decision-making.

"In line with our mission to make AI work for our clients and their stakeholders, we designed AVA AI with the sole purpose in mind of simplifying how enterprises interact with their data," said Dr. David Hardoon, CEO of ADI. "By streamlining processes and enhancing workflows, we enable them to not only make informed decisions swiftly and efficiently, but also to uncover valuable insights."

AVA AI features an enterprise-ready chat interface that allows users to interact with backend applications using natural language. It can extract information from internal sources, search the web when necessary, and integrate with existing systems securely. This enables teams to have context-aware conversations and generate crucial insights without the risk of misinformation.

The platform ensures data security through robust encryption and granular access controls, complying with industry standards to mitigate external threats and regulatory risks. Internally, role-based access control restricts access to sensitive data to specific users. Audit trails record all platform interactions, enhancing transparency and compliance, while usage analytics support decision-making and workflow optimization.
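
A minimal sketch of what role-based access control paired with an audit trail can look like in code; the roles, resources, and users below are invented for illustration and do not describe AVA AI's internals.

import datetime

ROLE_PERMISSIONS = {
    "analyst": {"reports"},
    "hr_manager": {"reports", "employee_records"},
}

audit_log = []  # every access attempt is recorded, allowed or not

def access(user, role, resource):
    allowed = resource in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user, "role": role, "resource": resource, "allowed": allowed,
    })
    return allowed

print(access("j.doe", "analyst", "employee_records"))      # False: outside the analyst role
print(access("a.cruz", "hr_manager", "employee_records"))  # True
print(len(audit_log), "entries in the audit trail")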

Enterprises can customize AVA AI agents -- programs capable of specialized tasks such as document parsing, accessing primary information sources, and integrating with enterprise APIs. This capability allows AVA AI to handle complex operational tasks using the enterprise's private data and Retrieval Augmented Generation (RAG) technology to provide evidence-backed answers, eliminating irrelevant or inaccurate input.
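
The Retrieval Augmented Generation pattern mentioned here can be illustrated generically: retrieve the most relevant internal passages for a question, then pass them to a language model as grounding context. The embedding function, documents, and prompt below are placeholder stand-ins for the idea, not ADI's implementation.

import numpy as np

def embed(text):
    # Placeholder embedding: hash words into a small fixed-size vector, then normalise.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Q2 revenue for the retail unit grew 12 percent year over year.",
    "The data retention policy requires deleting customer records after 7 years.",
    "Travel reimbursements must be filed within 30 days of the trip.",
]
doc_vectors = [embed(d) for d in documents]

def retrieve(query, k=2):
    scores = [float(embed(query) @ v) for v in doc_vectors]
    top = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)[:k]
    return [documents[i] for i in top]

question = "How long do we keep customer records?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)  # the prompt would then go to whichever LLM the platform is configured to use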

ADI's cross-functional teams, including data engineers, data scientists, full-stack developers, and DevSecOps specialists, ensure the best practices throughout the platform's implementation and customization. Their expertise accelerates the operationalization of AVA AI according to enterprise needs and goals.

To illustrate, a multilateral development bank utilizes AVA AI to manage multiple workstreams, tailoring the platform to meet each department's needs. AVA AI offers personalized capabilities such as access to internal and internet documents, support from multiple large language models (for example, OpenAI GPT-4, Anthropic Claude 3, Meta Llama 3), quick summaries, and sector-specific agents for domain-specific queries.

AVA AI is now available to enterprises across Southeast Asia, integrating with popular productivity suites like Google Workspace and Microsoft's SharePoint and Office 365. It is accessible through web browsers and communications platforms such as Slack and Microsoft Teams.

Read more

View original article

July 19, 2024 4:41:54AM

Back End News

Aboitiz Data Innovation launches AVA AI, leading the charge in enterprise-grade GenAI innovation

Aboitiz Data Innovation (ADI), the Data Science and Artificial Intelligence (DSAI) arm of the Aboitiz Group, recently announced the launch of AVA AI, its Generative AI (GenAI) Platform designed to empower enterprises across the public and private sectors. Built to enhance productivity, AVA AI seamlessly integrates into existing systems and handles complex workflows, allowing teams to gain access to constructive insights from their organizations' collective data securely and efficiently.

Many enterprises today face challenges with data dispersed across multiple sources and systems, leading to missed crucial insights and hampered strategic decision-making. Moreover, relying on GenAI platforms that depend exclusively on publicly available information is prone to generating irrelevant or inaccurate answers and increasing security risks for enterprises. These inefficiencies can stifle team productivity and overall operational effectiveness. AVA AI addresses these issues by consolidating data, providing actionable insights, and boosting efficiency and decision-making within organizations.

"In line with our mission to make AI work for our clients and their stakeholders, we designed AVA AI with the sole purpose in mind to simplify how enterprises interact with their data," shared Dr. David R. Hardoon, ADI chief executive officer. "By streamlining processes and enhancing workflows, we enable them to not only make informed decisions swiftly and efficiently, but also to uncover valuable insights."

At its core, AVA AI features an enterprise-ready chat interface that allows users to interact with backend applications using natural language on-premise or through cloud, breaking down data barriers. It can intelligently extract information from internal data sources, search the web if needed, and seamlessly integrate with existing systems and workflows securely. This enables teams to have smart, context-aware conversations to generate crucial insights from enterprise data without the risk of misinformation.

AVA AI ensures data security through robust encryption and granular access controls that comply with industry standards to mitigate external threats and regulatory risks. Data is also protected internally, with role-based access control managing access to sensitive data only to specific users. Additionally, audit trails record all platform interactions, enhancing transparency and compliance with relevant regulations, and the availability of usage analytics empowers decision-making and further optimizes workflows.

Enterprises can customize AVA AI agents -- in-built programs that can perform specialized skill sets such as document parsing, accessing primary information sources, and integrating with enterprise APIs. This enables it to handle complex operational tasks such as research, document extraction, content generation, and analytics. It uses the enterprise's private data and Retrieval Augmented Generation (RAG) technology to deliver evidence-backed answers, eliminating factually irrelevant or inaccurate input.

For example, a multilateral development bank utilizes AVA AI to phase in multiple workstreams, ensuring the needs of each department are met through customized agents. AVA AI offers personalized capabilities such as access to internal and internet documents for references, support from multiple large language models (e.g. OpenAI GPT-4, Anthropic Claude 3, Meta Llama 3) ensuring fair and trustworthy answers, synthesis of inputs from three models, the ability to open source-document links, quick summaries, and the option to deploy sector agents for domain-specific queries.

ADI proudly boasts cross-functional teams comprising data engineers, data scientists, full-stack developers, and DevSecOps specialists. These teams ensure best practices are upheld throughout implementation. Their proven domain expertise accelerates the platform's operationalization and customization according to enterprise needs and goals.

"While GenAI has shown much promise in consumer use cases, AVA AI distinguishes itself with the unparalleled ability for personalized applications across enterprises and functions," said Nicolas Paris, ADI's chief data & technology officer. "With its advanced features and possibilities, AVA AI is poised to unlock new realms of efficiency, innovation, and strategic insights for enterprises with diverse business needs."

AVA AI is now available to enterprises across Southeast Asia. It integrates easily with popular productivity suites like Google Workspace, Microsoft's SharePoint, and Office 365. It can also be easily accessed through web browsers and communications platforms such as Slack and Microsoft Teams.

Hailing from the Philippines' Aboitiz techglomerate and headquartered in Singapore, ADI offers data science and AI products and capabilities tailored for the financial services, industrials, and public sectors. AVA AI is the latest in ADI's product lineup, which encompasses Alternative Scoring, Transaction Monitoring, Customer Intelligence, and AI for Climate/ESG. Additionally, its capabilities extend to GenAI and data management and architecture.

To support its mission to make AI work for businesses in diverse sectors across the region, ADI is actively forging strategic partnerships across the industry. ADI had recently teamed up with Cloudera to empower Asia Pacific's financial services and industrial sectors with GenAI capabilities. The DSAI startup is also an official Amazon Web Services (AWS) partner for Select Tier Services and the Public Sector, boosting its ability to deliver AI-powered solutions.

Read more

View original article

July 18, 2024 10:52:53AM

Gadgets Magazine

Aboitiz Data Innovation launches AVA AI

Built with enhancing productivity in mind, AVA AI is able to seamlessly integrate into existing systems and handle complex workflows, allowing teams to gain access to constructive insights from their organizations' collective data securely and efficiently.

Aboitiz Data Innovation (ADI), the Data Science and Artificial Intelligence (DSAI) arm of the Aboitiz Group, recently announced the launch of AVA AI, its Generative AI (GenAI) Platform designed to empower enterprises across the public and private sectors.

Built with enhancing productivity in mind, AVA AI is able to seamlessly integrate into existing systems and handle complex workflows, allowing teams to gain access to constructive insights from their organizations' collective data securely and efficiently.

Many enterprises today face challenges with data dispersed across multiple sources and systems, leading to missed crucial insights and hampered strategic decision-making. Moreover, relying on GenAI platforms that depend exclusively on publicly-available information is prone to generating irrelevant or inaccurate answers and increasing security risks for enterprises. These inefficiencies can stifle team productivity and overall operational effectiveness. AVA AI addresses these issues by consolidating data, providing actionable insights, and boosting efficiency and decision-making within organizations.

"In line with our mission to make AI work for our clients and their stakeholders, we designed AVA AI with the sole purpose in mind to simplify how enterprises interact with their data," shared Dr. David R. Hardoon, ADI Chief Executive Officer. "By streamlining processes and enhancing workflows, we enable them to not only make informed decisions swiftly and efficiently, but also to uncover valuable insights."

At its core, AVA AI features an enterprise-ready chat interface that allows users to interact with backend applications using natural language on-premise or through cloud, breaking down data barriers. It can intelligently extract information from internal data sources, search the web if needed, and seamlessly integrate with existing systems and workflows securely. This enables teams to have smart, context-aware conversations to generate crucial insights from enterprise data without the risk of misinformation.

AVA AI ensures data security through robust encryption and granular access controls that are in compliance with industry standards to mitigate external threats and regulatory risks. Data is also protected internally, with role-based access control managing access to sensitive data only to specific users. Additionally, audit trails record all platform interactions, enhancing transparency and compliance with relevant regulations, and the availability of usage analytics empowers decision-making and further optimizes workflows.

Enterprises can customize AVA AI agents - in-built programs that can perform specialized skill sets such as document parsing, accessing primary information sources, and integrating with enterprise APIs. This enables it to handle complex operational tasks such as research, document extraction, content generation, and analytics using enterprise's private data and Retrieval Augmented Generation (RAG) technology to deliver evidence-backed answers, eliminating factually irrelevant or inaccurate input.

For example, a multilateral development bank utilizes AVA AI to phase in multiple workstreams, ensuring the needs of each department are met through customized agents. AVA AI offers personalized capabilities such as access to internal and internet documents for references, support from multiple large language models (e.g. OpenAI GPT-4, Anthropic Claude 3, Meta Llama 3) ensuring fair and trustworthy answers, synthesis of inputs from three models, the ability to open source-document links, quick summaries, and the option to deploy sector agents for domain-specific queries.

ADI proudly boasts cross-functional teams comprising data engineers, data scientists, full-stack developers, and DevSecOps specialists, ensuring best practices are upheld throughout implementation. Their proven domain expertise accelerates the operationalization and customization of the platform according to enterprise needs and goals.

"While GenAI has shown much promise in consumer use cases, AVA AI distinguishes itself with unparalleled ability for personalized applications across enterprises and functions," said Nicolas Paris, ADI Chief Data & Technology Officer. "With its advanced features and possibilities, AVA AI is poised to unlock new realms of efficiency, innovation, and strategic insights for enterprises with diverse business needs."

AVA AI is now available to enterprises across Southeast Asia. It integrates easily with popular productivity suites like Google Workspace and Microsoft's SharePoint and Office 365. It can also be easily accessed through web browsers and communications platforms such as Slack and Microsoft Teams.

Hailing from the Philippines' Aboitiz techglomerate and headquartered in Singapore, ADI offers data science and AI products and capabilities tailored for the financial services, industrials, and public sectors. AVA AI is the latest in ADI's product lineup, which encompasses Alternative Scoring, Transaction Monitoring, Customer Intelligence, and AI for Climate/ESG. Additionally, its capabilities extend to GenAI and data management & architecture.

To support its mission to make AI work for businesses in diverse sectors across the region, ADI is actively forging strategic partnerships across the industry. ADI had recently teamed up with Cloudera to empower Asia Pacific's financial services and industrial sectors with GenAI capabilities. The DSAI startup is also an official Amazon Web Services (AWS) partner for Select Tier Services and the Public Sector, boosting its ability to deliver AI-powered solutions.

Read more

View original article

July 17, 2024 5:21:46AM

Upgrade Magazine

Aboitiz Data Innovation launches AVA AI to lead in enterprise-grade GenAI innovation

Aboitiz Data Innovation (ADI), a Data Science and Artificial Intelligence (DSAI) startup based in Singapore, today unveiled AVA AI, its new Generative AI (GenAI) Platform. This platform is engineered to boost productivity by integrating smoothly into existing systems and handling complex workflows. It allows teams to securely and efficiently access valuable insights from their organisations' collective data.

Addressing the challenges of enterprise data with AVA AI

Enterprises currently face significant challenges due to data being spread across various sources and systems, resulting in missed critical insights and ineffective strategic decision-making. Traditional GenAI platforms that rely solely on publicly available information tend to produce irrelevant or inaccurate responses, increasing security risks for businesses. AVA AI tackles these issues by consolidating data, offering actionable insights, and enhancing organisational efficiency and decision-making.

"In line with our mission to make AI work for our clients and their stakeholders, we designed AVA AI with the sole purpose in mind to simplify how enterprises interact with their data," said Dr. David R. Hardoon, Chief Executive Officer of ADI. "By streamlining processes and enhancing workflows, we enable them to not only make informed decisions swiftly and efficiently, but also to uncover valuable insights."

Advanced functionality and security features of AVA AI

At its core, AVA AI includes an enterprise-ready chat interface that allows users to interact with backend applications using natural language, either on-premise or through the cloud. This feature helps to break down barriers to data access and integrates securely with existing systems and workflows. As a result, teams can have intelligent, context-aware conversations to derive crucial insights from enterprise data without the risk of misinformation.

AVA AI also ensures data security through robust encryption and granular access controls that comply with industry standards, thus mitigating external threats and regulatory risks. Data is further protected internally through role-based access control, which restricts access to sensitive information to specific users. Moreover, audit trails document all interactions on the platform, enhancing transparency and compliance with relevant regulations, while the availability of usage analytics helps to optimise workflows further.

Enterprises can customise AVA AI agents to perform specialised tasks such as document parsing, accessing primary information sources, and integrating with enterprise APIs. These agents can handle complex operational tasks such as research, document extraction, content generation, and analytics, using the enterprise's private data and Retrieval Augmented Generation technology to deliver evidence-backed answers.

For example, a multilateral development bank uses AVA AI to introduce multiple workstreams, ensuring that each department's needs are met through customised agents. AVA AI offers personalised capabilities such as access to internal and internet documents for references, support from multiple large language models, and the ability to deploy sector-specific agents for domain-specific queries.

"AVA AI distinguishes itself with unparalleled ability for personalised applications across enterprises and functions," added Nicolas Paris, Chief Data & Technology Officer of ADI. "With its advanced features and possibilities, AVA AI is poised to unlock new realms of efficiency, innovation, and strategic insights for enterprises with diverse business needs."

AVA AI is now available to enterprises throughout Southeast Asia. It easily integrates with popular productivity tools such as Google Workspace, Microsoft SharePoint, and Office 365 and is accessible through web browsers and communications platforms like Slack and Microsoft Teams.

Based in Singapore and part of the Philippines' Aboitiz conglomerate, ADI offers data science and AI products and capabilities tailored for the financial services, industrials, and public sectors. AVA AI is the latest addition to ADI's product lineup, which also includes Alternative Scoring, Transaction Monitoring, Customer Intelligence, AI for Climate/ESG, and data management & architecture.

ADI is actively forging strategic partnerships across the industry to support its mission of making AI work for businesses in diverse sectors across the region. It has recently teamed up with Cloudera to empower Asia Pacific's financial services and industrial sectors with GenAI capabilities. The DSAI startup is also an official Amazon Web Services (AWS) partner for Select Tier Services and the Public Sector, enhancing its capacity to deliver AI-powered solutions.

Read more

View original article

July 16, 2024 8:26:17AM

Tech Edition

3 months ago

Luka

Searching | Information Retrieval | Enterprise Information Integration | Technology

Google Cloud Enhances Vertex AI Grounding Capabilities For Reliable AI Responses

Google Cloud enhances Vertex AI grounding capabilities for reliable AI responses - ExBulletin

June 29, 2024 8:13:08AM

ExBulletin

Google Cloud has expanded Vertex AI's grounding capabilities, significantly enhancing the platform's ability to generate more accurate and reliable AI responses. These advancements are intended to mitigate AI hallucinations and improve the overall quality of generative AI applications and agents.

One key addition is the introduction of dynamic retrieval for Grounding with Google Search, now generally available. This feature enables Gemini, Google's advanced large language model, to intelligently decide whether to ground a user query with Google Search or rely on its own knowledge. This approach helps balance response quality and cost-effectiveness, as grounding with Google Search incurs additional processing costs. Gemini makes this decision by understanding whether the requested information is static, slowly changing, or rapidly evolving.

For example, if you ask about the latest movies, Gemini will use Google Search to get the latest information. Conversely, for general questions like "What is the capital of France?", it will provide answers from its existing knowledge base without external grounding. This dynamic approach not only improves the accuracy of responses but also optimizes resource usage.
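
The decision logic can be illustrated with a deliberately simplified sketch. In Vertex AI this classification is made by Gemini itself; the keyword heuristic below is only an assumed stand-in to show the static-versus-fast-changing distinction, not Google's method.

```python
# Assumed stand-in for the dynamic-retrieval decision: in Vertex AI, Gemini itself
# decides whether a query concerns static, slowly changing, or fast-changing facts.
FAST_CHANGING_HINTS = ("latest", "today", "this week", "current", "news", "price")

def needs_search_grounding(query: str) -> bool:
    """Return True when the query likely concerns rapidly evolving information."""
    q = query.lower()
    return any(hint in q for hint in FAST_CHANGING_HINTS)

for q in ("What are the latest movies in theaters?", "What is the capital of France?"):
    source = "Google Search" if needs_search_grounding(q) else "model knowledge"
    print(f"{q!r} -> answer from {source}")
```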

Google Cloud is also introducing a "high-fidelity" grounding mode, currently experimental, targeted at industries such as healthcare and financial services, where accuracy and reliability are paramount.

Additionally, Google will soon enable grounding models with third-party datasets, expected to be released in Q3. Working with specialist data providers such as Moody's, MSCI, Thomson Reuters, and ZoomInfo, Google will provide access to their datasets via Vertex AI. This capability will enable companies to integrate highly specific and reliable information into their AI models, further increasing the accuracy and relevance of the responses generated.

For enterprises looking to ground AI models in private data, Google Cloud offers Vertex AI Search and a suite of APIs for Retrieval Augmented Generation (RAG). These tools help enterprises create custom RAG workflows, build semantic search engines, and enhance existing search capabilities. The APIs, now generally available, provide capabilities for document parsing, embedding generation, semantic ranking, grounded answer generation, and a fact-checking service called Check Grounding.
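
The stages named above compose naturally into a pipeline. The sketch below shows that composition with toy stand-ins for each stage; the function names are placeholders for illustration and do not correspond to the actual Vertex AI API surface.

```python
# Sketch of how the named stages could compose into a custom RAG workflow.
# Every function is a toy stand-in, not the Vertex AI API.
def parse_documents(raw_files: list[str]) -> list[str]:
    """Stand-in for document parsing: split raw text into passages."""
    return [p for f in raw_files for p in f.split("\n\n") if p.strip()]

def embed(texts: list[str]) -> list[list[float]]:
    """Stand-in for embedding generation: hash-based toy vectors."""
    return [[(hash(t) >> s) % 97 / 97 for s in (0, 8, 16)] for t in texts]

def rank(query_vec: list[float], passage_vecs: list[list[float]]) -> list[int]:
    """Stand-in for semantic ranking: order passage indices by dot product."""
    def score(v: list[float]) -> float:
        return sum(a * b for a, b in zip(query_vec, v))
    return sorted(range(len(passage_vecs)), key=lambda i: score(passage_vecs[i]), reverse=True)

def generate_grounded_answer(query: str, passages: list[str]) -> str:
    """Stand-in for grounded answer generation: echo the top-ranked passage."""
    return passages[0] if passages else "No supporting passage found."

def check_grounding(answer: str, passages: list[str]) -> bool:
    """Stand-in for the fact-checking step: is the answer supported verbatim?"""
    return any(answer in p for p in passages)

passages = parse_documents(["Vertex AI offers grounding.\n\nGemini 1.5 Flash is fast."])
order = rank(embed(["Which model is fast?"])[0], embed(passages))
answer = generate_grounded_answer("Which model is fast?", [passages[i] for i in order])
print(answer, check_grounding(answer, passages))
```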

These enhancements are part of Google Cloud's broader strategy to make generative AI more trustworthy and suitable for enterprise use. By connecting AI models to diverse, trusted sources of information like web data, enterprise documents, operational databases, and enterprise applications, Google aims to root AI in what it calls "enterprise truth."

The focus on grounding addresses growing industry concerns about AI hallucinations: as AI models become more complex, the risk of producing erroneous or unreliable outputs increases. Grounding techniques such as RAG mitigate this risk by feeding models facts from external knowledge sources, improving the accuracy and reliability of their responses.

By enabling businesses to leverage both public and private data sources, Google is enabling the development of more robust and reliable AI applications across industries.


Read more

View original article

June 29, 2024 8:13:08AM

ExBulletin

Similar articles from other sources


Google Cloud Enhances Vertex AI Grounding Capabilities For Reliable AI Responses

Google Cloud has expanded the grounding capabilities of Vertex AI, significantly enhancing the platform's ability to generate more accurate and reliable AI responses. These advancements aim to mitigate AI hallucinations and elevate the overall quality of generative AI applications and agents.

One of the key additions is the introduction of dynamic retrieval for Grounding with Google Search, which is now generally available. This innovative feature allows Gemini, Google's advanced large language model, to intelligently decide whether to ground user inquiries in Google Search or rely on its intrinsic knowledge. This approach helps balance response quality with cost efficiency, as grounding with Google Search incurs additional processing costs. Gemini makes this decision based on its understanding of whether the information requested is likely to be static, slowly changing, or rapidly evolving.

For example, when asked about recent movies, Gemini uses Google Search for the latest information. Conversely, for general questions like "What is the capital of France?" it provides an answer from its existing knowledge base without external grounding. This dynamic approach not only enhances response accuracy but also optimizes resource usage.

Google Cloud is also introducing a "high-fidelity" mode for grounding, currently in the experimental phase. This mode targets industries such as healthcare and financial services, where precision and reliability are paramount.

Additionally, Google will soon enable grounding models with third-party datasets, expected to launch in Q3. Collaborating with specialized data providers like Moody's, MSCI, Thomson Reuters and Zoominfo, Google will offer access to their datasets via Vertex AI. This feature will allow enterprises to integrate highly specific and authoritative information into their AI models, further boosting the accuracy and relevance of generated responses.

For enterprises aiming to ground their AI models in private data, Google Cloud provides Vertex AI Search and a suite of APIs for Retrieval Augmented Generation (RAG). These tools help businesses create custom RAG workflows, build semantic search engines, or enhance existing search capabilities. The APIs, now generally available, offer functionalities for document parsing, embedding generation, semantic ranking, grounded answer generation and a fact-checking service called check-grounding.

These enhancements are part of Google Cloud's broader strategy to make generative AI more reliable and suitable for enterprise use. By connecting AI models to diverse and reliable information sources -- including web data, company documents, operational databases and enterprise applications -- Google aims to ground AI in what it calls "enterprise truth."

Focusing on grounding addresses the growing industry concern over AI hallucinations. As AI models become more complex, the risk of producing faulty or unreliable outputs increases. Grounding techniques like RAG mitigate this risk by feeding models with facts from external knowledge sources, thus improving the accuracy and trustworthiness of responses.

By enabling enterprises to leverage both public and private data sources, Google is enabling the development of more robust and trustworthy AI applications across various industries.

Read more

View original article

June 29, 2024 7:29:56AM

Forbes

Google, Moody's and Thomson Reuters team up to provide real-world data for AI - ExBulletin


Google is working to ensure its AI platform minimizes hallucinations as it seeks to woo enterprise customers. Hallucinations are a big problem for companies, especially those whose executives are already wary of the technology. To reassure them, Google is focusing on grounding its models, tapping respected third-party services Moody's, MSCI, Thomson Reuters and ZoomInfo, which will be available within Vertex AI starting next quarter. These companies will provide developers with qualified data to back up their model outputs and ensure responses are factually accurate.

For enterprise developers, this means they can leverage granular, quality data and expert knowledge to help them meet their standards. As Moody's chief product officer Nick Reed explained during a press briefing this week, the offer is to make Moody's data available to others so they can use it to ground their own responses, whether for internal or external interactions; the grounding service combines Google's models, Moody's data, and Google Search into assistants or applications built by the service's customers.

Google already offers Google Search as part of its grounding service, allowing businesses to augment Gemini's output with up-to-date, high-quality information from across the web. But there are now more reliable options for businesses that want a source of truth and do not want to risk inaccurate information from blogs that may have gamed SEO to appear highly authoritative.

But that's not all. Google is also announcing high-fidelity grounding. Available in preview, it's designed to help AI systems work better with specific sets of information. It's useful when dealing with tasks like summarizing multiple documents simultaneously or extracting key data from financial reports. High-fidelity grounding is powered by Google's Gemini 1.5 Flash.


When asked if the new grounding partnerships and the new high-fidelity grounding indicate that Google's generative AI is the most reliable factual model compared to competitors, Google Cloud CEO Thomas Kurian cited three key differentiators. First, Google is grounding its AI against web data, leveraging its reputation as the most trusted source of web data and Google Search's understanding of the real world. Second, the company is giving options for how it grounds its data, with specific sub-verticals able to control what gets grounded. Third, high-fidelity grounding improves the quality of responses by telling the AI to pay more attention to the prompt, rather than all the data the model may have been trained on.

"Combining these three elements gives us the highest level of control over the quality of answers the model can produce," he said. "This is our attempt to reduce hallucinations and increase confidence in the model."

The announcement came on the same day as other Google Cloud news, including the general availability of Gemini 1.5 Flash and 1.5 Pro, Gemma 2, and other Vertex AI updates.



Read more

View original article

June 28, 2024 7:41:57PM

ExBulletin

Google partners with Moody's to counter its AI's hallucination problems

Partnerships like these aim to build trust in AI models by ensuring they are anchored in credible, factual information. Image Credit: Reuters

Google is ramping up efforts to bolster the reliability of its AI systems by integrating real-world factual data, with a notable new partnership with Moody's to leverage financial information.

In a bid to tackle the issue of AI generating inaccurate or misleading information, Google Cloud has expanded its initiative to ground Vertex AI results in verifiable data sources, including web search and internal enterprise data. Now, the tech giant is introducing a novel approach: incorporating third-party data from partners such as Moody's, Thomson Reuters, and ZoomInfo to further enhance the accuracy of AI-generated outputs.

According to Google Cloud CEO Thomas Kurian, this partnership aims to build trust in AI models by ensuring they are anchored in credible, factual information. "You can actually trust the model to do a task on your behalf because you have a basis for trusting it," Kurian emphasized in an interview with Axios.

The initiative underscores a broader industry effort among leading AI providers to validate the safety and dependability of generative AI systems for business applications. Google's introduction of a "confidence score" feature further enhances transparency, providing users with a numeric assessment of the model's confidence in its responses.

Moreover, Google is rolling out capabilities that allow customers to instruct AI models to prioritize information from specific documents or sources, reducing reliance on generalized training data. "We've taught the model how to guarantee that when it responds, it takes what's in the input prompt as the primary information it needs to pay attention to," Kurian explained, highlighting efforts to mitigate distractions from extensive training datasets.

In addition to these enhancements, Google has announced the general availability of its low-latency Gemini 1.5 Flash model and Gemini 1.5 Pro, capable of processing extensive contexts equivalent to up to two hours of video content.

These developments mark a significant step forward in fortifying AI systems against inaccuracies and reinforcing their utility and reliability across various business sectors. As Google continues to innovate and collaborate with industry leaders like Moody's, the trajectory towards more trustworthy AI applications in enterprise settings appears promising.

Read more

View original article

June 28, 2024 4:39:05PM

Firstpost

Google Cloud's Vertex AI gets new grounding options

The new grounding features will help enterprises to reduce hallucinations across their generative AI-based apps and agents, the company says.

Google Cloud is introducing a new set of grounding options that will further enable enterprises to reduce hallucinations across their generative AI-based applications and agents.

The large language models (LLMs) that underpin these generative AI-based applications and agents may start producing faulty output or responses as they grow in complexity. These faulty outputs are termed hallucinations, as the output is not grounded in the input data.

Retrieval augmented generation (RAG) is one of several techniques used to address hallucinations: others are fine-tuning and prompt engineering. RAG grounds the LLM by feeding the model facts from an external knowledge source or repository to improve the response to a particular query.

The new set of grounding options introduced inside Google Cloud's AI and machine learning service, Vertex AI, includes dynamic retrieval, a "high-fidelity" mode, and grounding with third-party datasets, all of which can be seen as expansions of Vertex AI features unveiled at its annual Cloud Next conference in April.

Dynamic retrieval to balance between cost and accuracy

The new dynamic retrieval capability, which will soon be offered as part of Vertex AI's feature to ground LLMs in Google Search, looks to strike a balance between cost efficiency and response quality, according to Google.

As grounding LLMs in Google Search racks up additional processing costs for enterprises, dynamic retrieval allows Gemini to dynamically choose whether to ground end-user queries in Google Search or use the intrinsic knowledge of the models, Burak Gokturk, general manager of cloud AI at Google Cloud, wrote in a blog post.

The choice is left to Gemini because not all queries need grounding, Gokturk explained, adding that Gemini's training knowledge is very capable.

Gemini, in turn, takes the decision to ground a query in Google Search by segregating any prompt or query into three categories based on how the responses could change over time -- never changing, slowly changing, and fast changing.

This means that if Gemini were asked about the latest movies, it would look to ground the response in Google Search, but it wouldn't ground a response to a query such as "What is the capital of France?", as the answer is unlikely to change and Gemini would already know it.

High-fidelity mode aimed at healthcare and financial services sectors

Google Cloud also wants to aid enterprises in grounding LLMs in their private enterprise data; to do so, it showcased a collection of APIs under the name APIs for RAG as part of Vertex AI in April.

APIs for RAG, now generally available, includes APIs for document parsing, embedding generation, semantic ranking, and grounded answer generation, as well as a fact-checking service called check-grounding.

High-fidelity experiment

As part of an extension to the grounded answer generation API, which uses Vertex AI Search data stores, custom data sources, and Google Search to ground a response to a user prompt, Google is introducing an experimental grounding option named grounding with high-fidelity mode.

The new grounding option, according to the company, is aimed at further grounding a response by forcing the LLM not only to understand the context in the query but also to source the response from a custom-provided data source.

This grounding option uses a Gemini 1.5 Flash model that has been fine-tuned to focus on a prompt's context, Gokturk explained, adding that the option provides sources attached to the sentences in the response along with grounding scores.
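
Conceptually, a high-fidelity answer pairs each sentence with the source it came from and a grounding score. The structure below is an assumed illustration of that idea, not the actual Vertex AI response schema.

```python
# Assumed shape of a grounded answer with per-sentence sources and scores.
# This is an illustration of the concept, not Vertex AI's response format.
from dataclasses import dataclass

@dataclass
class GroundedSentence:
    text: str               # one sentence of the generated answer
    source: str             # document the sentence is drawn from
    grounding_score: float  # confidence that the source supports the claim

answer = [
    GroundedSentence("Q2 revenue rose 8% year over year.", "10-Q_2024_Q2.pdf", 0.94),
    GroundedSentence("Operating margin was unchanged.", "10-Q_2024_Q2.pdf", 0.81),
]
low_confidence = [s for s in answer if s.grounding_score < 0.9]
print(f"{len(low_confidence)} sentence(s) may need human review")
```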

Grounding with high-fidelity mode currently supports key use cases such as summarization across multiple documents or data extraction against a corpus of financial data.

This grounding option, according to Gokturk, is aimed at enterprises in the healthcare and financial services sectors, as these enterprises cannot afford hallucinations, and the sources provided in query responses help build trust in end-user-facing generative AI applications.

Other major cloud service providers, such as AWS and Microsoft Azure, currently don't have an exact feature that matches high-fidelity mode, but each has systems in place to evaluate the reliability of RAG applications, including response-generation metrics.

While Microsoft uses the Groundedness Detection API to check whether the text responses of large language models (LLMs) are grounded in the source materials provided by users, AWS' Amazon Bedrock service uses several metrics to do the same task.

As part of Bedrock's RAG evaluation and observability features, AWS uses metrics such as faithfulness, answer relevance, and answer semantic similarity to benchmark a query response.

The faithfulness metric measures whether the answer generated by the RAG system is faithful to the information contained in the retrieved passages, AWS said, adding that the aim is to avoid hallucinations and ensure the output is justified by the context provided as input to the RAG system.
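
A faithfulness-style check can be sketched as measuring how much of a generated answer is actually covered by the retrieved passages. The heuristic below is an assumed, deliberately crude illustration of the idea and is not Bedrock's actual metric.

```python
# Toy faithfulness-style check: how much of the answer is covered by the passages?
# Illustrative only; not AWS's implementation.
def faithfulness(answer: str, passages: list[str]) -> float:
    """Fraction of answer words that also appear in the retrieved passages."""
    answer_words = [w.strip(".,").lower() for w in answer.split()]
    passage_text = " ".join(passages).lower()
    supported = sum(1 for w in answer_words if w and w in passage_text)
    return supported / max(len(answer_words), 1)

passages = ["The contract renews automatically on 1 March unless cancelled."]
print(faithfulness("The contract renews automatically in March.", passages))  # higher score
print(faithfulness("The contract was cancelled last year.", passages))        # lower score
```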

Enabling third-party data for RAG via Vertex AI

In line with its announced plans at Cloud Next in April, the company said it is planning to introduce a new service within Vertex AI from the next quarter to allow enterprises to ground their models and AI agents with specialized third-party data.

Google said that it was already working with data providers such as Moody's, MSCI, Thomson Reuters, and Zoominfo to bring their data to this service.


Read more

View original article

June 28, 2024 2:09:39PM

InfoWorld

Google Cloud says enterprise AI chatbots will be more "grounded" in real-world facts

Google signs deals to make its enterprise AI offerings perform better

Google is enhancing its efforts to improve the factual grounding of enterprise AI chatbots by integrating real-world financial data from business and financial services.

The plans aim to reduce inaccuracies in AI-generated information, known as 'hallucinations,' and improve overall reliability by grounding responses in verified data.

The news is part of Google's plans to include third-party data sources, with initial partners comprising Moody's, Thomson Reuters and ZoomInfo, on top of web search and internal company data.

Fairly late to the party, enterprises had initially waited for more secure and private systems to protect sensitive company data; however, with more enterprises now deploying AI solutions, the focus has shifted to accuracy.

Speaking about the changes, Google Cloud CEO Thomas Kurian told Axios: "You can actually trust the model to do a task on your behalf because you have a basis for trusting it."

To further enhance reliability, Google is also introducing a confidence score, which provides a numeric indicator of the AI model's certainty in its answer. Enterprise users will also be able to direct the AI chatbot to prioritize information from specific documents or data included in a prompt, rather than its broader training data.

Kurian added: "We've taught the model how to guarantee that when it responds, it takes what's in the input prompt as the primary information it needs to pay attention to."

Moreover, Google is expanding Vector Search to support hybrid searches, combining vector-based searches with text-based keyword searches for improved accuracy. The upgrade is currently available in public preview.
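
Hybrid search of this kind is typically a weighted blend of a semantic (vector) similarity and a keyword-match score. The sketch below shows one assumed way to combine the two signals; the weighting and the toy scores are illustrative, not Google's implementation.

```python
# Assumed illustration of hybrid scoring: blend vector similarity with keyword match.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, text: str) -> float:
    """Fraction of query terms that appear in the document text."""
    terms = set(query.lower().split())
    return sum(1 for t in terms if t in text.lower()) / max(len(terms), 1)

def hybrid_score(query: str, query_vec: list[float],
                 doc_text: str, doc_vec: list[float], alpha: float = 0.6) -> float:
    """alpha weights the semantic signal; (1 - alpha) weights the keyword signal."""
    return alpha * cosine(query_vec, doc_vec) + (1 - alpha) * keyword_score(query, doc_text)

print(hybrid_score("quarterly revenue", [0.1, 0.9], "Revenue grew this quarter.", [0.2, 0.8]))
```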

Google Cloud's announcement, authored by Burak Gokturk, VP & GM for Cloud AI & Industry Solutions, concludes: "As these technologies become even more capable, we are committed to helping businesses realize the full potential of grounded generative AI in the real world."

Read more

View original article

June 28, 2024 1:35:02PM

TechRadar

Google to partner with Moody's, Thomson Reuters to give AI more real world data

To ensure AI apps and agents have more reliable and accurate data, Google has announced a set of partnerships. According to a company blog post, Google is partnering with Moody's, MSCI, Thomson Reuters and Zoominfo.

Google's Vertex AI will use third-party data from providers such as Moody's and Thomson Reuters for better AI results and responses. Developers will be able to use this data to ensure that the responses delivered are more accurate and contain fewer 'hallucinations'.

Vertex AI will offer a new service that will let customers ground their models and AI agents with specialised third-party data, said Google.

"This will help enterprises integrate third-party data into their generative AI agents to unlock unique use cases, and drive greater enterprise truth across their AI experiences," said Burak Gokturk, VP & GM, cloud AI & industry solutions, Google Cloud.

With some enterprises still skeptical about deploying AI agents, Google wants to give them the assurance of AI models being integrated with third-party data from the likes of Moody's and Thomson Reuters.

Other new AI features for enterprises

Google also announced several new features -- which it is calling grounding capabilities -- to help enterprise customers build more capable agents and apps. There's Grounding with Google Search, which will offer dynamic retrieval, a new capability to help customers balance quality with cost efficiency by intelligently selecting when to use Google Search results and when to use the model's training data.

There is also grounding with high-fidelity mode, which uses a Gemini 1.5 Flash model that has been fine-tuned to focus on customer-provided context to generate answers. The service supports key enterprise use cases such as summarisation across multiple documents or data extraction against a corpus of financial data. This results in higher levels of factuality, and a reduction in hallucinations.

"When high-fidelity mode is enabled, sentences in the answer have sources attached to them, providing support for the stated claims. Grounding confidence scores are also provided," said Gokturk in a blog post.

Read more

View original article

June 28, 2024 8:43:46AM

Moneycontrol

Google partners with Thomson Reuters, Moody's and more to give AI real-world data


As it seeks to win over enterprise customers, Google is trying to ensure that its AI platform minimizes hallucinations. It's a big deal for organizations, especially those whose executives are already wary about the technology. Google is doubling down on model grounding to reassure them, turning to reputable third-party services Moody's, MSCI, Thomson Reuters and Zoominfo. These four will be available within Vertex AI starting next quarter. They will offer developers qualified data to backstop their model outputs against to ensure responses are factually accurate.

What it means for enterprise developers is that they can now leverage the fine-tuned quality data and expertise of Subject Matter Experts to ensure it meets their standards. As Nick Reed, Moody's chief product officer, explains during a press briefing this week, "What's on offer here is our data being made available to other people so that they can use it to ground their own responses that they might use either internally or externally for their own interactions. We think the service offering from Google is a way of being able to leverage their models and our data to be able to put both of those things, as well as Google Search, into an assistant or an application that the customers of the grounding service would use."

Google already offers Google Search as part of its grounding service, allowing businesses to augment Gemini outputs with "fresh and high-quality information" from the web. But there are now more trustworthy options for companies who want source-of-truth information and do not risk having inaccurate information from a blog that may have gamed SEO to be highly authoritative.

But that's not all, as Google is also announcing high-fidelity grounding. Available through an experimental preview, it's designed to help AI systems work better with a given set of specific information. It is useful when dealing with tasks like summarizing multiple documents simultaneously or extracting important data from financial reports. Grounded with high-fidelity is powered by Google's Gemini 1.5 Flash.

When asked if the new grounding partnerships and the new grounding with high-fidelity signaled Google's generative AI as being the "most reliable factual model out there compared to competitors," Google Cloud Chief Executive Thomas Kurian cites three key differentiators: First, Google is using its reputation as having the "most trusted source of web data and real-world understanding" with Google Search to help ground AI against web data. Second, the company provides options on how data should be grounded in specific sub-verticals -- "you control what you want to ground against." And third, high-fidelity grounding directs AI to pay more attention to what is being prompted, not all the data that the model may have been trained on, improving the quality of the response.

"All three of these elements combined give you the highest degree of control on the quality of the answers that a model can generate," he remarks. "And it's our attempt at reducing hallucination and improving trust in models.

Read more

View original article

June 27, 2024 6:39:59PM

VentureBeat

Google partners with Thomson Reuters, Moody's and more to give AI real-world data - RocketNews



Read more

View original article

June 27, 2024 7:02:21PM

RocketNews
