Tom Hollander


AI-powered SQL Query Converter Tool is Now Available in Relational Migrator

When I traveled to Japan for the first time, it was shortly after translation apps on smartphones had really taken off. Even though I knew enough phrases to get by as a tourist, I was amazed at how empowered I felt being able to have smoother conversations and read signs more easily. The power of AI helped me understand a language I had only a passing familiarity with and drastically improved my experience in another country. I was able to spend more time enjoying myself and less time looking up common words and sentences in a phrase book.

So what does this have to do with application modernization? Transitioning from relational databases as part of a modernization effort is more than migrating data from a legacy database to a modern one. All the planning, designing, testing, refactoring, validating, and ongoing operation makes modernization a complex project to navigate successfully. MongoDB’s free Relational Migrator tool has helped with many of these tasks, including schema design, data migration, and code generation, but we know this is just the beginning.

One of the most common challenges of migrating legacy applications to MongoDB is working with SQL queries, triggers, and stored procedures that are often undocumented and must be manually converted to MongoDB Query API syntax. This requires deep knowledge of both SQL and the MongoDB Query API, which is rare when teams are used to only one system or the other. In addition, teams often have hundreds, if not thousands, of queries, triggers, and stored procedures to convert, which is extremely time-consuming and tedious. Doing these conversions manually would be like traveling abroad and looking up each word or phrase one by one in a phrase book instead of using a translation app.

Thankfully, generative AI finally gives us the modernization equivalent of the translation app on your phone. The latest release of Relational Migrator can use generative AI to help your developers quickly convert existing SQL queries, triggers, and stored procedures to work with MongoDB, using your choice of programming language (JavaScript, C#, or Java). By automating the generation of development-ready MongoDB queries, your team can redirect its time to more important testing and optimization efforts, accelerating your migration project. Teams that are familiar with SQL can also use the Query Converter to help close their MongoDB knowledge gap: the SQL objects they already know are translated, making it easier to learn the new syntax by seeing the two side by side.

Let’s take a closer look at how Query Converter can convert a SQL Server stored procedure to work with MongoDB.

Figure 1: The MongoDB Query Converter Dashboard

We’ll start by importing the stored procedure from the relational database into our Relational Migrator project. This particular stored procedure joins the results from two tables, performs some arithmetic on some of the columns, and filters the results based on an input parameter.
CREATE PROCEDURE CustOrdersDetail @OrderID int
AS
SELECT ProductName,
    UnitPrice = ROUND(Od.UnitPrice, 2),
    Quantity,
    Discount = CONVERT(int, Discount * 100),
    ExtendedPrice = ROUND(CONVERT(money, Quantity * (1 - Discount) * Od.UnitPrice), 2)
FROM Products P, [Order Details] Od
WHERE Od.ProductID = P.ProductID
    AND Od.OrderID = @OrderID

Developers who are experienced with the MongoDB aggregation framework will know that the equivalent way to join data from two collections is the $lookup stage. However, when migrating a relational database to MongoDB, it often makes sense to consolidate data from multiple tables into a single collection. In this example, we are doing exactly that by combining data from the Orders, Order Details, and Products tables into a single orders collection. This means that, given the changes to the schema, we do not actually need a $lookup stage at all, as the data from each of the required tables has already been merged into a single collection. Relational Migrator’s Query Converter works alongside the schema mapping functionality and automatically adjusts the generated query to work against your chosen schema.

With JavaScript chosen as our target language, the converted query avoids the need for a costly join and includes MongoDB equivalents of our original SQL arithmetic functions. The query is now ready to test and include in our modernized app; a brief usage sketch appears at the end of this post.

const CustOrdersDetail = async (db, OrderID) => {
  return await db.collection('orders').aggregate([
    { $match: { orderId: OrderID } },
    { $unwind: '$lineItems' },
    {
      $project: {
        ProductName: '$product.productName',
        UnitPrice: { $round: ['$lineItems.unitPrice', 2] },
        Quantity: '$lineItems.quantity',
        Discount: { $multiply: ['$lineItems.discount', 100] },
        ExtendedPrice: {
          $round: [
            {
              $multiply: [
                '$lineItems.quantity',
                { $subtract: [1, '$lineItems.discount'] },
                '$lineItems.unitPrice'
              ]
            },
            2
          ]
        }
      }
    }
  ]).toArray();
};

Relational Migrator does more than just query conversion: it also assists with app code generation, data modeling, and data migration, which drastically cuts down on the time and effort required to modernize your team's applications. Just like a language translation app while traveling abroad, it can drastically improve your experience converting and understanding a new language or technology.

The new Query Converter tool is now available for free for anyone to try as part of a public preview in the Relational Migrator tool. Download Relational Migrator and try converting your SQL queries and stored procedures today.
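As a closing aside, here is a rough usage sketch, not output from Relational Migrator, showing how the converted CustOrdersDetail function above might be wired into an application with the official MongoDB Node.js driver. The connection string, database name, and order ID are placeholder assumptions.

const { MongoClient } = require('mongodb');

async function main() {
  // Placeholder connection string and database name; substitute your own Atlas URI.
  const client = new MongoClient('mongodb+srv://user:password@cluster0.example.mongodb.net');
  try {
    await client.connect();
    const db = client.db('northwind');

    // CustOrdersDetail is the converted function shown above; 10248 is a sample order ID.
    const details = await CustOrdersDetail(db, 10248);
    console.log(details);
  } finally {
    await client.close();
  }
}

main().catch(console.error);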

March 25, 2024

Designing MongoDB Schemas for Large SQL Migrations

Unlike relational databases, MongoDB provides significant flexibility in designing schemas. MongoDB schema design should take into consideration both the relationships between data elements and the data access patterns in order to evolve the most optimal schema for a workload. Designing the right schema for MongoDB is vital to getting the best cost-performance ratio for your workloads, and a bad schema design can massively impact application performance.

Enterprise customers working on SQL migrations typically have hundreds of tables that need to be migrated into MongoDB. The Relational Migrator tool has dramatically simplified schema transformation, data migration, and real-time CDC from SQL databases to MongoDB. Despite that, customers still spend considerable time on MongoDB schema design. When we move beyond a few tables, designing the right MongoDB schema for a SQL migration brings with it several challenges:

- The relationships between tables are often not codified at the database level as primary or foreign key constraints. Rather, they are enforced at the application level, and we would need to spend a significant amount of time with the customer's SMEs to surface these constraints.
- The cardinality of data relationships plays a key role in MongoDB schema design, but it is challenging to gather in production settings.
- Data access patterns, such as frequently used tables, tables accessed together, and index usage, cannot be quickly gathered in an automated way. Analyzing the existing SQL footprint in an ad hoc manner can become quite cumbersome, and running sub-optimal brute-force queries on the existing database may cause many issues.

All these challenges ultimately result in a lot of manual effort for key customer SMEs during the schema design process.

PeerIslands approach

PeerIslands has been working with multiple Fortune 100 customers on SQL migrations and has evolved a pre-migration analysis approach to address this problem. The pre-migration analysis helps you understand your current SQL footprint and access patterns and provides valuable insights for designing the right schema on MongoDB. It is fully automated and optimized for performance. We at PeerIslands recently used this approach for one of the largest US insurance providers, where we rapidly analyzed more than 700 tables and provided schema design recommendations for key migration initiatives they were working on. Below are some key areas covered by the pre-migration analysis.

Data relationships and cardinality

MongoDB provides significant flexibility for modeling relationships. SQL database metadata provides rich data on the relationships between the various data elements. Depending on the complexity of the relationships, we can model them in MongoDB using embedded documents or references between documents. In addition to relationships, the cardinality of the existing data can be gathered from SQL indexes in a performant manner and helps us correctly model one-to-few, one-to-many, and one-to-zillion relationships in MongoDB.

Query analysis

SQL databases store the actual queries that have been executed in the environment, and we can also obtain the top queries by execution count. Analyzing the queries helps us understand which data elements are accessed together, read/write metrics, and more. Understanding this helps us keep frequently accessed data elements together in MongoDB, as illustrated in the sketch below.
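For illustration only (the collection and field names here are hypothetical, not taken from a specific customer engagement), here is what co-locating frequently accessed data can look like: an order document that embeds the line items and shipping address that query analysis shows are read together, while keeping rarely accessed data in its own collection.

// Hypothetical example document for an "orders" collection. Data that is usually
// read together (order header, line items, shipping address) is embedded, so a
// single read returns everything the application needs.
const exampleOrder = {
  _id: 10248,
  orderDate: new Date('2024-01-15'),
  customerId: 'VINET',
  shippingAddress: { street: "59 rue de l'Abbaye", city: 'Reims', country: 'France' },
  lineItems: [
    { productName: 'Chai', unitPrice: 18.0, quantity: 12, discount: 0 },
    { productName: 'Chang', unitPrice: 19.0, quantity: 10, discount: 0.05 }
  ]
};

// Rarely accessed or unbounded data (for example, a full audit history) can instead
// live in its own collection and hold a reference back to the order.
const exampleAuditEvent = {
  orderId: 10248, // reference to the order above
  changedAt: new Date(),
  change: 'status: SHIPPED'
};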
Query analysis helps us rapidly understand the referential integrity constraints and join patterns that exist in the data. This is helpful in situations where referential integrity is not codified in the database layer.

Table metadata

Table and database metadata give us the overall lay of the land: the number of tables, SQL stored procedures, and views, the size of tables, and the most frequently accessed tables, among others. This helps us get a quick understanding of the complexity of the migration.

Indexes

Indexes speed up data access and retrieval, and they can have a significant impact on the performance of the database. Understanding the current indexes in the SQL database and their usage provides good input for index creation on MongoDB. An understanding of the available indexes can also help perform data migration more efficiently.

Is it possible to get all this information in a performant and automated way in a production setting?

Putting pre-migration analysis into practice with Oracle

We have developed playbooks to extract this information from several SQL databases. Below is a quick technical preview of how we can get this information from one of the most common databases we help migrate from: Oracle.

Oracle data dictionary

The Oracle Data Dictionary is a set of read-only tables and views that contain metadata about database objects such as tables, columns, indexes, constraints, views, users, and privileges. By querying these tables and views, users can retrieve detailed information about the structure and contents of database objects. This information can be used to analyze the database schema, optimize queries, and perform other tasks related to database management and maintenance. We can leverage the data dictionary to gather the details discussed above.

Metrics

In Oracle, you can get table metrics such as table size, number of rows, number of blocks, and number of empty blocks from the DBA_TABLES view. Below is an example query to get the table metrics for all tables in a schema:

SELECT owner, table_name, num_rows, blocks, empty_blocks, avg_row_len, chain_cnt, avg_space
FROM dba_tables
WHERE owner = 'ADMIN'
ORDER BY num_rows DESC;

Indexes

To get all the indexes for a given schema in Oracle, we can use the DBA_INDEXES view. Here's an example query:

SELECT owner, index_name, table_name, uniqueness, index_type, status
FROM dba_indexes
WHERE owner = 'ADMIN'
ORDER BY table_name, index_name;

Relationships

To get the one-to-many relationships of all the tables in Oracle, we can query the data dictionary views ALL_CONSTRAINTS and ALL_CONS_COLUMNS:

SELECT uc.table_name,
       rc.table_name AS referenced_table_name,
       ucc.column_name,
       uc.constraint_name,
       rc.constraint_name AS referenced_constraint_name
FROM all_constraints uc
JOIN all_cons_columns ucc
  ON uc.owner = ucc.owner AND uc.constraint_name = ucc.constraint_name
JOIN all_constraints rc
  ON uc.r_owner = rc.owner AND uc.r_constraint_name = rc.constraint_name
WHERE uc.constraint_type = 'R'
  AND uc.owner = 'ADMIN'
ORDER BY uc.table_name, uc.constraint_name, ucc.position;

Variations of the above query can help in getting one-to-one and many-to-many relationships.

Queries list

To get a list of queries executed in Oracle, you can query the database's V$SQL and V$SQLTEXT views:

SELECT *
FROM v$sql
WHERE parsing_schema_name = 'ADMIN'
ORDER BY last_active_time DESC;

This query retrieves all SQL statements executed by the specified schema and orders them by the last time they were active.
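Because these are ordinary read-only queries against the data dictionary, collecting the results can be scripted rather than run by hand. Below is a minimal sketch using the node-oracledb driver to export the executed-query list to JSON for offline analysis; the connection details, schema name, and output file are placeholder assumptions, not part of the PeerIslands tooling.

const fs = require('fs');
const oracledb = require('oracledb');

async function exportQueryList() {
  // Placeholder connection details; substitute your own host, service name, and credentials.
  const connection = await oracledb.getConnection({
    user: 'ADMIN',
    password: process.env.ORACLE_PASSWORD,
    connectString: 'dbhost.example.com:1521/ORCLPDB1'
  });
  try {
    const result = await connection.execute(
      `SELECT sql_id, sql_text, executions, last_active_time
         FROM v$sql
        WHERE parsing_schema_name = :schema
        ORDER BY last_active_time DESC`,
      { schema: 'ADMIN' },
      { outFormat: oracledb.OUT_FORMAT_OBJECT }
    );
    // Save the executed-query list for offline analysis of access patterns.
    fs.writeFileSync('query-list.json', JSON.stringify(result.rows, null, 2));
  } finally {
    await connection.close();
  }
}

exportQueryList().catch(console.error);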
Top 10 queries

The same V$SQL view can provide the top 10 queries as well:

SELECT sql_text, executions
FROM v$sql
ORDER BY executions DESC
FETCH FIRST 10 ROWS ONLY;

All of these queries use only Oracle metadata; they are highly performant and do not negatively impact instance performance. A similar approach can be used to gather this information from other SQL databases like MySQL and MS SQL.

Conclusion

PeerIslands has been working with multiple customers on SQL migration projects, handling end-to-end services including schema design, data migration, app refactoring, SQL procedure conversion, and more. We have also been an engineering partner for MongoDB Relational Migrator and have worked closely with the team on multiple initiatives over the last two years. We are working with the Relational Migrator team to fully automate pre-migration analysis for SQL databases such as Oracle, MySQL, MS SQL, and Postgres, and to surface these insights to users as part of the Relational Migrator tooling. This will save time for users and also help prevent incorrect schema designs. Follow this space for more announcements on this topic.

With extensive experience handling multiple SQL engagements in partnership with MongoDB's Professional Services team, a rich repository of tools, and close collaboration with the Relational Migrator team, PeerIslands is your partner to accelerate your SQL migration journey. Reach out to partners@mongodb.com to learn more.

October 20, 2023

Announcing MongoDB Relational Migrator

We’re thrilled to announce a new tool: MongoDB Relational Migrator. Relational Migrator simplifies the process of moving workloads from relational databases to MongoDB.

We’ve heard it from more of our customers than we can count: organizations want to replatform existing applications from relational databases to MongoDB. MongoDB is more intuitive, more flexible, and more scalable than relational databases. Customers tell us that they need to move away from a relational backend in order to build new functionality into existing applications with increased agility, to make new and better use of enterprise data, or to scale existing services to volumes or usage patterns they were never designed to handle.

While some customers have successfully migrated some of their relational workloads to MongoDB, many have struggled with how to approach this challenge. Requirements vary. Can we decommission the old database, or does it need to stay running? Is this a wholesale replatforming, or are we carving out pieces of functionality to move to MongoDB? Some customers end up using a variety of ETL, CDC, message queue, streaming, pub/sub, or other technologies to move data into MongoDB, but others have decided it’s just too difficult.

It’s also important to think carefully about data modeling as part of a migration. Though it’s possible to naively move a relational schema into MongoDB without any changes, that won’t deliver many of MongoDB’s benefits. A better practice is to design a new, more denormalized MongoDB schema, and potentially to take the opportunity to revise the architecture of the application as well (a simple illustration of this contrast appears at the end of this post).

We want to make this process easier, which is why we’re developing MongoDB Relational Migrator. Relational Migrator streamlines the process of moving to MongoDB from a relational database and is compatible with Oracle, Microsoft SQL Server, MySQL, and PostgreSQL. It connects to a relational database to analyze its existing schema, then helps architects design and map to a new MongoDB schema. When you’re ready, Relational Migrator will perform the data migration from the source RDBMS to MongoDB. Migration can be a one-shot migration if you’re prepared for a hard cutover; soon, we will also support continuous sync if you need to leave the source system running and keep pushing changes into MongoDB.

We know that moving long-running systems to MongoDB still isn’t as simple as pushing a button, which is why Relational Migrator is designed to be used with assistance from our Field Engineering teams. For example, as part of a consulting engagement with MongoDB, a consulting engineer can help you evaluate which applications are the best candidates for migration, design and implement a new MongoDB backend, and execute the migration. Relational Migrator will significantly lower the effort and risk in transforming and replicating your data, leaving more time to focus on other aspects of application modernization.

If you’ve been trying to figure out how to get off of a relational database, get in touch to learn more about MongoDB Relational Migrator.
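To make the data modeling point concrete, here is a purely illustrative contrast (hypothetical names, not output from Relational Migrator) between a naive one-collection-per-table mapping and a denormalized document model for a customer and their addresses.

// Naive lift-and-shift: one collection per table, joined by application-side lookups.
const customerRow = { _id: 101, name: 'Ada Lopez', email: 'ada@example.com' };
const addressRows = [
  { _id: 9001, customerId: 101, type: 'billing', city: 'Lyon' },
  { _id: 9002, customerId: 101, type: 'shipping', city: 'Paris' }
];

// Denormalized document model: the addresses travel with the customer they belong to,
// so a single read returns everything the application usually needs.
const customerDocument = {
  _id: 101,
  name: 'Ada Lopez',
  email: 'ada@example.com',
  addresses: [
    { type: 'billing', city: 'Lyon' },
    { type: 'shipping', city: 'Paris' }
  ]
};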

June 7, 2022

Import and Export Your Charts Dashboards

With the latest release of MongoDB Charts, we’ve added the ability to export any dashboard to a file, as well as import those files back into a Charts project. To export a dashboard, simply choose Export Dashboard from the dashboard’s tile on the main Dashboards page. To import a dashboard, choose the command from the menu next to Add Dashboard. Let’s look at some things you can do with this new capability.

Copy dashboards between projects

MongoDB Cloud allows you to create multiple projects, each of which has its own Atlas cluster. There are a bunch of reasons to use multiple projects, but one common example is to use them for different environments of an application, such as Development, QA, or Production. Each Charts dashboard also lives within a project, and up until now there was no way of moving or copying a dashboard between projects. This could be problematic if a dashboard created in the Development project needed to be promoted to QA or Production. With the new Import/Export feature, you can simply export a dashboard from one project and import it into another.

Version control your dashboards

Taking this example one step further, now that you can export your dashboards to a file, you can treat them as code. That allows you to store the dashboard definitions in a source control system, making it easy to track changes, go back to specific versions, and keep the dashboards stored safely alongside other code artefacts used in your solution.

Share dashboards with the community

While some dashboards only make sense when connected to your own private data, others may be built on a commonly available schema, whether that’s the Atlas sample data, some open data from the web, or data created by a reusable script. Once you’ve built a great dashboard using this generally available data, why not export it and share it with the world?

Copy dashboards and change their data sources

Whenever you import a dashboard from a file, Charts will give you the opportunity to “remap” the data sources used on the dashboard. This is important because the data in the new project might not match what was in the original project. You can use this feature to your advantage if you want to quickly change the data sources used on a dashboard, even if you are importing back into the same project. As an example, suppose you are a multinational company and use a different collection to track sales in each country you operate in. You could build a dashboard with a bunch of great charts, all linked to your “US Sales” collection. If you wanted to easily build an equivalent dashboard for your Australian sales, you could simply export the US dashboard, reimport it, and remap the data sources on import to the “Australian Sales” collection.

Migrate from Charts on-prem

Finally, this feature provides a great option for Charts on-prem users who want to move to the cloud and take advantage of all of the new features only available to cloud users. While the on-prem version of Charts does not have the Export feature, on-prem users can contact MongoDB Support to obtain a script that will generate export files for on-prem dashboards. Those files can then be imported into your MongoDB Cloud projects using the new Import feature.

We hope you’re as excited about this feature as we are! Remember, if you haven’t used Charts before, you can get started for free by signing up for MongoDB Cloud, deploying an Atlas cluster, and activating Charts.

June 24, 2021

New Ways to Customize Your Charts

When it comes to building charts, we know that details matter. Small differences in layout, styling, or composition can make a big difference in how well your chart communicates the story behind your data. That’s why we’ve just released a whole bunch of new capabilities in MongoDB Charts, giving you more control than ever. Here’s what’s new:

- Secondary Y Axis: Charts can be a great way to show correlation between two different datasets, but when their scales differ greatly it can be hard to see the correlation. By choosing to plot one or more series on a secondary Y axis, you can let them make the most of the available space and highlight any interesting relationships. The secondary Y axis can be enabled on Grouped Column, Discrete Line, Continuous Line, and Continuous Area charts.
- Legend Position: Chart legends can now be moved to the top, right, or bottom of your chart, or hidden altogether.
- “All Others” Group: Charts has long allowed you to limit a chart to show, say, just the top 10 values. The new “All Others” option lets you add an additional bar or donut segment that shows the value of all other categories not included in the limit.
- “Count by Value” aggregation: Building multi-series charts is now easier than ever with the new “Count by Value” aggregation option, which automatically creates a series for each distinct value found in a field.
- String binning with Regular Expressions: Last month we introduced binning of string values, allowing you to choose the exact values to go into each bin. This month we’ve extended this further by allowing you to use regular expressions to assign values to a bin based on powerful patterns.
- Scatter Mark formatting: We’ve ramped up the customization options available on Scatter charts, allowing you to control the size, border thickness, and opacity of each plotted mark.
- Line Dash Styles: A new option on Discrete and Continuous Line charts gives each series a different dash style, making it easier to differentiate the series and improving the accessibility of your charts.

Here’s one example of a chart that shows off the secondary Y axis, custom legend position, and line dash styles. And here’s another, showing the effect you can get by customizing your scatter chart’s mark style.

We hope you enjoy these new charting capabilities, but we’re not done yet! Over the next couple of months, we’ll be moving our focus to Table charts, adding options like conditional formatting, text wrapping, and column pinning. If you have any other ideas for new customization features, please let us know using the MongoDB Feedback Engine. If you haven’t tried Charts yet, you can get started for free by signing up for MongoDB Atlas and deploying a free tier cluster.

November 18, 2020