What’s new in SQL Server 2025 CTP 2.0

Three years ago, when the first public preview of SQL Server 2022 (CTP 2.0) was announced, I was only a few months into my time on the SQL Docs team and had very little to do with that release.

Three years later, the team is slightly larger (we’re called Data Docs now), and I was much more involved with helping scores of people merge the content for SQL Server 2025 (CTP 2.0).

How much effort went into the documentation update?

By the numbers for the sql-docs repository (just one of several repositories we manage):

  • Over 1,250 Git commits (which we squashed)
  • Over 600 files changed (including more than 120 new files)
  • Around 50 unique contributors

New administrator features

The features in the first public preview of SQL Server 2025 (CTP 2.0) that I’m most interested in are:

  1. tmpfs support for tempdb in SQL Server on Linux. This is the technical way of saying “run tempdb on a RAM drive”. For production workloads, that’s a big deal. And for non-production workloads (I’m thinking developer containers), there’s nothing stopping you from running all your CI/CD databases in memory. (There’s a sketch of the general idea after this list.)
  2. Standard Developer edition. Finally, am I right? MVPs and customers alike have been asking Microsoft for decades to provide a development environment that matches the more affordable SQL Server Standard edition, and now it’s here with SQL Server 2025 (CTP 2.0).
  3. Optimized locking. This feature was introduced in Azure SQL a few months ago, and was an obvious upgrade for SQL Server.
  4. Accelerated database recovery in tempdb. Another finally! Turn this on with optimized locking, along with tempdb on tmpfs, and you’re unlocking two of the biggest performance bottlenecks in SQL Server today. (Both settings are sketched after this list.)
  5. Remove memory-optimized filegroups. I know this is a little thing, but it’s nice to know you can clean up your files if you’re not using In-Memory OLTP.
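
For the tmpfs item, here’s the general idea as a minimal sketch. The mount point, size, and file paths are placeholders, and the release may expose a dedicated configuration switch for this, so treat the official docs as authoritative:

    -- On the Linux host, create an in-memory filesystem first (placeholder size and path):
    --   sudo mkdir -p /mnt/tempdb-tmpfs
    --   sudo mount -t tmpfs -o size=8g tmpfs /mnt/tempdb-tmpfs
    --   sudo chown mssql:mssql /mnt/tempdb-tmpfs

    -- Then point the tempdb files at the tmpfs mount; this takes effect after a restart.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = '/mnt/tempdb-tmpfs/tempdb.mdf');

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = '/mnt/tempdb-tmpfs/templog.ldf');

Since tempdb is recreated at every startup anyway, losing its contents on a reboot isn’t a concern.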

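For optimized locking and accelerated database recovery, enabling them looks roughly like this, going by the existing Azure SQL syntax. The database name is just a sample, and how ADR surfaces for tempdb specifically is worth confirming in the docs:

    -- Optimized locking requires accelerated database recovery on the same database.
    ALTER DATABASE WideWorldImporters SET ACCELERATED_DATABASE_RECOVERY = ON;
    ALTER DATABASE WideWorldImporters SET OPTIMIZED_LOCKING = ON;

    -- Confirm that optimized locking is active.
    SELECT DATABASEPROPERTYEX(N'WideWorldImporters', N'IsOptimizedLockingOn') AS IsOptimizedLockingOn;
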
New developer features

There are also some developer features that look pretty interesting to me:

  1. JSON data type and new functions! This isn’t your grandmother’s NVARCHAR(MAX), but a proper native binary JSON data type (sketched below). I might have something to say about this soon, as part of my “How data types are stored” series.
  2. Improvements to sp_executesql to reduce compilation storms. A compilation storm happens when lots of queries compile at the same time, which is common on systems that run dynamic Transact-SQL (there’s a short example of the pattern below).
  3. Fuzzy string matching. SQL Server 2025 (CTP 2.0) introduces four functions to help you calculate the difference between two strings (sketched, along with the regex functions, right after this list). I could have used this for pattern matching stuff I wrote 10 years ago in CLR code.
  4. Regular expressions. Interesting? Definitely. Powerful? Sure. Just don’t expect this to be fast.
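
Since items 3 and 4 are both about strings, here’s a combined sketch. The function names are how I remember them from the preview documentation, so treat them as assumptions and check the official reference for the exact list and signatures:

    -- Fuzzy string matching: distance and similarity between two strings.
    SELECT EDIT_DISTANCE(N'color', N'colour')          AS EditDistance,
           JARO_WINKLER_SIMILARITY(N'Smith', N'Smyth') AS JaroWinklerSimilarity;

    -- Regular expressions as a predicate in a WHERE clause.
    SELECT name
    FROM sys.objects
    WHERE REGEXP_LIKE(name, N'^sp_[a-z_]+$');

And as item 4 says: expect these to scan, not to be fast.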

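On the JSON data type (item 1 above), here’s a minimal sketch of what a native json column might look like. The table and column names are mine; the type name is what the preview announcement uses, as I understand it:

    -- A column typed as json (native binary storage, not nvarchar(max)).
    CREATE TABLE dbo.Orders
    (
        OrderId int IDENTITY(1,1) PRIMARY KEY,
        Payload json NOT NULL
    );

    INSERT INTO dbo.Orders (Payload)
    VALUES (N'{"customer":"Contoso","lines":[{"sku":"A-100","qty":3}]}');

    -- Existing JSON functions such as JSON_VALUE work against the new type.
    SELECT OrderId,
           JSON_VALUE(Payload, '$.customer') AS Customer
    FROM dbo.Orders;
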
I think this is a strong release, and I haven’t even mentioned a lot of the stuff that excites other people, like vectors, Query Store improvements with intelligent query processing, performance improvements for Halloween protection, and support for Base64 encoding and decoding.
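
Circling back to the sp_executesql item: the improvement itself is inside the engine, but the pattern it helps is ordinary parameterized dynamic SQL, along these lines (the table and column are made up for the example):

    -- One parameterized statement means one cached plan reused across values,
    -- instead of a fresh compilation for every literal.
    DECLARE @sql nvarchar(max) =
        N'SELECT OrderId, Payload FROM dbo.Orders WHERE OrderId = @OrderId;';

    EXEC sys.sp_executesql
        @sql,
        N'@OrderId int',
        @OrderId = 42;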

Read more about what’s new in this release in the official documentation.

3 thoughts on “What’s new in SQL Server 2025 CTP 2.0”

  1. I just saw the SQL 2025 announcement and wondered if anyone could talk about what to expect in SQL 2025. I thought of you, and bingo, you have already written about the latest release. On June 1st the Edmonton group will make this available: https://sessionize.com/edmdata-2025. Would you want to present to the group on SQL 2025?
    It would be good to hear from you again.

    Chris

  2. I wish Microsoft would bring some “exciting” improvements to Power BI Reporting Services; I am talking about paginated reports, not Power BI reports.

  3. I’d like to concentrate on two significant features that I hope to see in future SQL Server releases: the ability to perform selective table restorations and how this could integrate with current replication technologies, as well as the capability to execute recoveries independent of the version.

    1.) Selective Table Restoration
    SQL Server does not have a native way to restore only one table from a backup. This means that if you need to recover a single table from a backup, you must recover the whole database. This can take a lot of time and resources, especially for large databases.

    Restoring tables selectively would let users recover only the tables they want, saving time and resources. This feature would be handy in situations where a particular table has been mistakenly changed or deleted. Instead of having to recover the whole database, users could just recover the relevant table from a backup. This would be similar to Recovery Manager (RMAN) Table Point In Time Recovery (PITR) in Oracle.
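
    Today the closest workaround is to restore the backup as a side-by-side copy and then copy the table back by hand, something like this (the names and paths are only placeholders), which is exactly the time and space problem described above:

        RESTORE DATABASE SalesDB_Copy
        FROM DISK = N'/var/opt/mssql/backup/SalesDB.bak'
        WITH MOVE N'SalesDB'     TO N'/var/opt/mssql/data/SalesDB_Copy.mdf',
             MOVE N'SalesDB_log' TO N'/var/opt/mssql/data/SalesDB_Copy_log.ldf',
             RECOVERY;

        -- Then copy just the table that was actually needed back into the live database.
        INSERT INTO SalesDB.dbo.Customers
        SELECT * FROM SalesDB_Copy.dbo.Customers;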

    2.) Log Shipping Subsets of Tables
    As you know, log shipping is a high availability and disaster recovery solution that SQL Server offers. It works at the database level, meaning that it copies the whole database from one server (the primary server) to another (the secondary server).

    I have run across several situations where copying the whole database is not needed or wanted. For example, a user might only need to copy a subset of tables for reporting reasons. In such cases, the option to log ship a subset of tables would be very helpful and would reduce the data footprint tremendously; lots of my databases are over 4 TB, and we only need a subset downstream for reporting.

    For example, a subset restore for initializing transactional replication on VLDBs.
    Initializing transactional replication can be difficult when dealing with very large tables, especially those with over a billion rows.

    The snapshot agent, which is in charge of initializing replication by generating a snapshot of the published database objects and data, has trouble with such large tables. This can cause performance issues and long initialization times.

    One possible solution to this problem is the addition of a subset initialization restore feature. This would allow users to restore only the necessary tables for publications, rather than the entire database. Using a subset initialization backup instead of the snapshot agent would make the process more efficient.
