Visual Studio 2022 Support!

Hello! We have very good news today. We just released Visual Assist 2021.5 and it has our official support for the Visual Studio 2022 release.

This blog post could be as short as that sentence, but I’d like to write a bit more about our support and how we got here. In the meantime, if you’re using VS2022, I recommend you download and install 2021.5 now!


Historically it’s been very important to us to release support for new versions of Visual Studio very quickly, and if you’ve read our blog posts this year about VS2022, you’ll have read me say that before. While many customers stay on older versions for some time, we have a lot of people who upgrade immediately, so we’ve always put a lot of emphasis on being able to ship a version of Visual Assist supporting new versions of Visual Studio quickly. While I’ve been product manager here for almost three years, this is the first new major version of Visual Studio during that time, and I and the whole team were keen to continue that speedy-support tradition.

We started work on supporting VS2022 early, and over the past nine months we’ve shared our progress on that work, shipping beta support for Preview 3, then Previews 5, 6, and 7/RC (we skipped Preview 4 due to a breaking bug). We released Visual Assist 2021.4 shortly before Visual Studio 2022 shipped, and many of you are already using it with VS2022.

Visual Studio 2022 was a large change from previous versions. Not only did it move to 64-bit, but it also introduced many new APIs, and those APIs change the interaction model from synchronous to asynchronous. This is a pattern Visual Studio has been following for several years (and we encourage it; it really helps the IDE), but as you may know, migrating from any sync model to an async one is rarely trivial. Usually, the majority of the work for each new Visual Studio release is around adapting to API changes, and that was the case here too. In fact, the most significant bug we saw when using Visual Assist (abbreviated VAX) 2021.4 with VS2022, an issue where the code suggestions window sometimes did not appear in the right place, was related to the move to one specific async API. It was also one of the issues we fixed for today’s official support.
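To make the sync-to-async point concrete, here is a small generic sketch using std::future. This is not the actual Visual Studio API, and the function names are invented for illustration; it only shows why every call site has to change when a synchronous call becomes asynchronous:

```cpp
#include <future>
#include <string>

// Hypothetical names for illustration - not the real VS or VAX APIs.
std::string FetchSymbolsSync() {
    return "symbols"; // caller gets the result immediately
}

std::future<std::string> FetchSymbolsAsync() {
    // The result is now a promise of a value, not the value itself.
    return std::async(std::launch::async, [] { return std::string("symbols"); });
}

std::string UseSymbols() {
    auto fut = FetchSymbolsAsync();
    // Every caller must now decide: block on get() (losing the benefit of
    // async), or restructure the surrounding code as a continuation.
    return fut.get();
}
```

The cost is rarely in any single call; it is that every consumer of the API must be reviewed and often restructured.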


  • We released Visual Assist 2021.4 on October 29.
  • Visual Studio 2022 was released on November 8, nine days later.
  • VAX 2021.4 overall worked pretty well with the final VS2022 build.
    • But both we and some customers found a few more issues, and we’ve spent the two weeks since VS2022’s release resolving them.
  • VAX 2021.5 with official support for Visual Studio 2022 was released on Nov 22!

Official Support for VS2022

Yesterday afternoon US time we posted Visual Assist 2021.5 on our website. We have a rolling release mechanism: in about a week you should see in-IDE notifications about the new release, followed a couple of weeks later by the new version appearing in the Visual Studio Marketplace. However, you can download and install it directly right now.

We’ve been working on VS2022 for something like nine months now and we’re really happy to have Visual Assist publicly available with Visual Studio 2022 support. We hope it is useful to you!

A note of thanks: VS2022 was a large change from previous versions, and Microsoft has been very open and helpful. We’re very grateful to them for their communications with us, the beta program, and their assistance while we’ve added support. Thank you!

I want to note as well that though as PM I get to write these posts, I really do very little, and all the credit for this release and VS2022 support goes to our amazing team. Thank you!

Visual Assist 2021.4 is released! (And notes on Visual Studio 2022)

We are pleased to have just released Visual Assist 2021.4. VAX uses a rolling release mechanism, so it will be a couple of weeks until VAX notifies you in-product and a couple more before it’s available on the Visual Studio Marketplace, but you can download Visual Assist 2021.4 today from our website.

VAX 2021.4 is a quality-focused release. Our last release, 2021.3, was mostly focused on supporting the upcoming Visual Studio 2022 Previews. That early work on support for VS2022 means that when the official release of VS2022 is out, we’ll be able to ship official support very fast. (More on this below.)

However, not everyone upgrades to a new Visual Studio release immediately — in fact many people have very good reasons for staying on older versions for quite some time! — and we want to focus on providing what all our customers across many versions need. (We still support VS 2005!) That’s the focus for this version. This release, as a quality release, focuses on fixing bugs and adding changes for everyone.

Our release notes contain the full details, but notable changes include support for the new External Include Directories property in Visual Studio and an update of the Code Inspection engine to LLVM/Clang version 12.0.1. There is a plethora of bug fixes as well. All in all, we feel this release is a solid update no matter which version of Visual Studio you use, and for all the ways you use Visual Assist.

Visual Studio 2022

Swiftly supporting new versions of Visual Studio is very important to us, because we understand it’s important to many of you, and we hope you’ve enjoyed seeing VAX working in many of the Visual Studio 2022 Previews. We expect VS2022 to be released very soon. In fact, as we were preparing this release, Previews 5, 6 and 7 came out, the pace increasing so fast that our installer only mentions support for Preview 6, but we do in fact support Preview 7 as well. At the same time we’ve seen significant stability improvements with each newer preview.

We don’t want to delay our release schedule to wait until VS2022 is shipped, especially because this release is focused on what customers using other versions need, which is why we’re releasing now. But since VS2022 will be released so soon, and since we are very ready for that to happen, you can expect a swift mini-update from us adding official support.

In other words:

  • VAX 2021.4 supports Visual Studio 2022 Previews 5 and 6 (with issues leading to hangs), and 7/RC3 (fully)
  • We’re eagerly waiting for VS2022 to be released, and we’ll add official support for it very quickly when it’s out
Visual Assist 2021.4 running in Visual Studio 2022 Preview 7 (Release Candidate 3)

We recommend you install Visual Assist 2021.4 now, and we look forward to shipping official support for Visual Studio 2022 very soon.

Visual Assist support for Visual Studio 2022 Previews!


There’s a lot of interest in the developer community about the new version of Visual Studio, which is in preview currently. This week we released Visual Assist 2021.3 (build 2420), and Visual Assist includes beta support for the Visual Studio 2022 Previews.

VAssistX menu open in the Visual Studio 2022 Extensions menu
Visual Assist 2021.3 running inside Visual Studio 2022 Preview 3

Visual Studio Preview

Visual Studio (VS) 2022’s main change—and it’s a significant one—is that it will become a 64-bit process. Since Visual Assist (VA or VAX) runs as a plugin, in-process, we needed to build a 64-bit version of our plugin DLL. Those who have upgraded 32-bit code to 64-bit code before know that, even in well-architected code, it takes some work even just to review code to ensure it is correct. In addition, the new version adds and modifies a number of APIs we rely on to interact with the IDE. Adapting to those was the most significant change for us.
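As an aside on what that 64-bit review work typically involves, here is one classic pitfall, as a generic illustration rather than code from Visual Assist: Windows uses the LLP64 model, so long stays 32-bit while pointers grow to 64 bits, and any old code that stored a pointer in a long now truncates it.

```cpp
#include <cstdint>

// Generic illustration of a 32->64 porting hazard; not Visual Assist code.
// On 64-bit Windows (LLP64), sizeof(long) == 4 but sizeof(void*) == 8,
// so casting a pointer to 'long' silently loses the top 32 bits.
std::uintptr_t StorePointer(const void* p) {
    // Correct: std::uintptr_t is guaranteed wide enough to hold a pointer.
    return reinterpret_cast<std::uintptr_t>(p);
}

const void* LoadPointer(std::uintptr_t v) {
    return reinterpret_cast<const void*>(v);
}
```

Reviewing a large codebase for exactly this kind of assumption is much of the “it takes some work even just to review code” mentioned above.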

We’ve tested fully against VS 2022 Preview 2, and in fact the installer says ‘Preview 2’. There are some known regressions:

  • VA Quick Info not appearing when triggered by keyboard in VS 2022 [case: 146063]
  • Source Links example plugin does not load in VS 2022 [case: 146012]
  • Changing from Cascadia to another font for italicized system symbols requires restart in VS 2022 [case: 145979]
  • plus a few others.

Visual Studio 2022 Preview 3 was released two days ago, overlapping with our release timing, and our regression tests are showing some failures. Currently we believe those are caused by changed behaviour in a Visual Studio API which we use when verifying in-IDE behaviour (i.e., not an issue in VAX itself), but it is always possible that once that is resolved, further test failures will surface and need fixing. However, we believe the current build is well worth trying out on Preview 3 as well.


Many of our customers strain the Visual Studio IDE, with many plugins and SDKs installed. Both to help them, and because we believe it’s part of being a good member of the Visual Studio ecosystem, where our plugin sits alongside others, last November we greatly reduced our in-process memory usage, largely through the use of memory-mapped files (spoiler: the full blog post is worth reading).

Now that Visual Studio 2022 is a 64-bit process, that work is not necessary for VS2022. For older versions of Visual Studio, those techniques are still used, so if you’re using VS 2019 or even VS 2005 with Visual Assist 2021.3, you’ll still benefit from the lighter memory impact within the IDE.

When we did that work, we also focused on performance to ensure that the changes to memory access had either zero or a positive impact. The blog notes that for heavy usage, we had equal performance; for small projects, VAX actually was a bit faster. Despite no longer needing the memory usage code we added, VS 2022 benefits from that performance work as well, plus some more work we’ve done while adding support. Since it’s a beta build, and Visual Studio itself is in preview, we do not have hard numbers. But a rough guideline is that an operation like Find References that previously may have taken (say) two minutes will now take about one minute twenty seconds, or about two thirds the time.


It’s historically been very important to us to have swift support for new versions of Visual Studio, and this work is our foundation for quickly officially supporting Visual Studio 2022 when it is officially released. While the main focus of this release was VS 2022 support, there are other changes as well in 2021.3, which we’ll document in the What’s New. We know we have many customers using older versions of Visual Studio, and as well as those improvements today you can look forward to further improvements focusing on other, non-VS areas as we switch back to a more normal focus in our next release.

We’re really happy to be able to ship beta support for the Visual Studio 2022 previews, and providing an even faster Visual Assist is a happy bonus. We’ll continue working, and we look forward to shipping full support when Visual Studio publishes its final release.

Visual Assist 2021.3 is available for download on our website now, and will be on the Visual Studio Marketplace in a few days. Enjoy!

Busy Busy Busy!

Are you busy? I know we are. Our development team has been taking a hard look at what’s going on in the development landscape, and I thought we could share a few things that we’ve got going on behind the scenes:

Visual Studio 2022 Public Preview is Here

Visual Studio 2022 is in public beta, with some exciting changes – they’re moving to 64-bit! The team at Microsoft has been super supportive, and we’ve been working on adding full support as we learn more and more about the changes being made. Much of this work is just making sure we’re ready for the release, but there will be some improvements made along the way. Don’t worry. If you’re not ready to move to VS 2022, we’re not dropping support for 2019, or 2017, 2015, or… (we do support a lot of versions, don’t we?) If you are an early adopter, be sure to opt-in for beta downloads. We’ll be releasing a new version with initial support for VS 2022 in the coming weeks!

Unreal Engine 5 Early Access

The team at Epic has been a great partner and supportive of our development efforts, and granted us an early look at some of the things we might encounter with the update. Again, much of the effort is just making sure everything works for UE5 the same way it does for UE4. While initial support is there, we know there will be more to come. We know our game devs are excited about this update and so are we. Again, be sure to sign up for beta to get the latest updates from our end. 

Windows What?

While most of us (I mean myself) weren’t expecting a big update from Windows, it looks like it is indeed happening. This won’t have as big an impact on Visual Assist, as we’re more dependent on the IDE than anything else. Still, we’re excited to see what the future holds for Windows and what the downstream effects will be. If you’re in the know, feel free to comment. Either way, we’ll support any moves made as we are able.

All this to say we’re busy and excited about this year in development news. As things get back to somewhat normal in day-to-day life, it looks like we’ll be making up for lost time from 2020 fairly quickly.

Looking for a New Database?

A last note for Visual Studio users: our sister company InterBase has released a new plugin on the VS extension store, adding support for ADO.NET to an already amazing database. We know everyone’s needs around a database are different and think InterBase is a solid mix of lightweight and full-featured with solid encryption all around. Learn more and give it a try here.

How to Set Up VA

Visual Assist is a coding productivity tool for C++ and C# developers. It extends Visual Studio to make the programming experience better by providing tools for understanding code, checking code, and writing code. Some of its benefits include fast navigation, code inspection and modernization, many refactorings, code correction, Unreal Engine 4 support (for which it is famous), and code assistance. In this article, we will walk through the steps needed to set up Visual Assist and show you a very brief overview of how to use it.

1. Installing Visual Assist

First, visit the Whole Tomato Visual Assist website to download the installer. You can use Visual Assist for free for a month as a trial, or alternatively, you can choose to buy a license from the get-go. Keep in mind that once your free trial is over, you will need to buy the license to keep using it.

On the main page, click the red “Try it for free” button, and on the next page, click “Download Free Trial” to download the installer.  

Before downloading the file, you will need to fill in some required information.

Once you have completed this step, your download will begin.

When you are done downloading the installation file, open it to begin the installation process. Visual Assist supports many versions of Visual Studio, and you can install it for any version you have on your computer.

Once the installation is complete, you can immediately start making use of Visual Assist.

When you run Visual Studio, Visual Assist will show you a tip. You can turn this off, but we recommend you keep it on since it shows helpful information about what you can achieve each time you start.

And that’s it! Visual Assist is installed. 

Visual Assist is very powerful but deliberately has a minimal UI. That’s great for power users who know it well, but if you’re new to it, it can be harder to discover all the things it can do. Read on for a short intro to using it with a new application.

2. Using Visual Assist

The following few steps show you how to create a new application in Visual C++. If you’re familiar with this, you can skip ahead to where we demonstrate Visual Assist.

When starting Visual Studio, you will be directed to the Welcome page, where you will be able to create new projects from the pop-up window. Click on “Create new project” to move to the next step.    

You can create different types of applications using Visual Studio. In this guide, we will create a Windows C++ application for the purpose of illustration. 

Select Windows Desktop Application (note the C++ label) to start building your app. You can select whichever type of application you are interested in building. Once this is complete and you have successfully configured your project, you will finally be able to make use of Visual Assist. Click on “Create” and watch the magic unfold.  

Note that Visual Assist keeps its UI minimal to stay out of your way while you use Visual Studio. Click on “Extensions” in the toolbar and navigate to “VAssistX,” which will present a list of functionalities that you can explore.

You can also access the Quick Action and Refactoring menu by using the following shortcut (Shift+Alt+Q). For a comprehensive list of keyboard shortcuts, click here.

3. Like Visual Assist?

When your trial comes to an end, you will be prompted to use a license. Select “Buy” to purchase a license or “Register” to use an existing one. 

By now you should be able to set up Visual Assist without any problem and start making use of it alongside Visual Studio. To find out more about Visual Assist and some of its functions, and why everyone from developers in major studios right down to students uses it, click here.

How To Modernize With Visual Assist Part 2


In the previous article, you read about five popular techniques to improve your projects and apply several Modern C++ patterns. Here’s a list of five more things! We’ll go from the override keyword to nullptr, scoped enums, and more. All techniques are super-easy with Visual Assist!

1. Override

Let’s have a look at the following code with a simple class hierarchy:

#include <iostream>
#include <string>
#include <vector>

using BmpData = std::vector<int>;

class BaseBrush {
public:
    virtual ~BaseBrush() = default;

    virtual bool Generate() = 0;

    virtual BmpData ApplyFilter(const std::string&) { return BmpData{}; }
};

class PBrush : public BaseBrush {
public:
    PBrush() = default;

    bool Generate() { return true; }

    BmpData ApplyFilter(const std::string& filterName) {
        std::cout << "applying filter: " << filterName;
        return BmpData{};
    }

private:
    BmpData m_data;
};

When you run VA code inspections, you’ll immediately see that it complains about not using the override keyword on those two virtual methods.


In short, the override keyword allows us to explicitly specify that a given member function overrides a function from a base class. It’s a new keyword available since C++11.

As you might already know, you can go to VA Code Inspection Results and apply the fixes automatically. You’ll get the following code:

class PBrush : public BaseBrush {
public:
    PBrush() = default;

    bool Generate() override;

    BmpData ApplyFilter(const std::string& filterName) override;

private:
    BmpData m_data;
};

What are the benefits of using override?

The most significant advantage is that we can now easily catch mismatches between the virtual base function and its override. When there is even a slight difference in the declaration, virtual polymorphism might not work.
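Here is a small self-contained sketch of that failure mode (the class names are invented for illustration): without override, a slight signature mismatch silently declares a brand-new virtual function instead of overriding the base one.

```cpp
#include <string>

struct Base {
    virtual ~Base() = default;
    virtual std::string Name(int) const { return "Base"; }
};

// The parameter is long, not int, so this is a NEW virtual function,
// not an override - and without 'override' the compiler stays silent.
struct Sloppy : Base {
    std::string Name(long) const { return "Sloppy"; }
};

// With 'override', the same mistake would be a compile error, so the
// signature below is forced to match the base exactly.
struct Safe : Base {
    std::string Name(int) const override { return "Safe"; }
};
```

Calling Name(0) through a Base reference to a Sloppy object still returns "Base": the mismatch goes unnoticed at compile time and the intended override simply never fires.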

Another critical point is code expressiveness. With override, it’s effortless to read what the function is supposed to do.

Another benefit is being more modern: similar keywords are also available in other languages like C#, Visual Basic, Java, and Delphi.

2. nullptr

When I work with legacy code, my Visual Assist Code Inspection Result is often filled with lots of the following items:


This often happens with code like:

if (pInput == NULL) {
    LOG(Error, "input is null!");
}

pObject->Generate("image.bmp", NULL, NULL, 32);

Why does Visual Assist complain about this code? Because NULL is just a #define for 0: it’s an integer constant, not a true null pointer. This also causes problems when you have code like:

int func(int param);
int func(float* data);

If you call:

func(NULL);

you could expect the overload taking a pointer to be called, but it’s not. That’s why a common guideline in C and early C++ suggests not overloading functions on pointer and integral types.

The solution? Just use nullptr from C++11.

nullptr is not 0; it has its own distinct type, std::nullptr_t.

Now when you write:

func(nullptr);

you can expect the proper function invocation: the version with func(float* data) will be invoked.

Not to mention that nullptr is a separate keyword in C++, so it stands out from the regular code. Sometimes NULL is displayed in a different color, but sometimes it is not.

Visual Assist makes it super easy to apply the fix, and it’s a very safe change.
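The overload-resolution difference can be sketched in a few lines (the function names here are invented for illustration):

```cpp
#include <string>

// Two overloads that differ only in integral vs pointer parameters.
std::string pick(int)    { return "int overload"; }
std::string pick(float*) { return "pointer overload"; }

std::string demo() {
    // pick(NULL) would resolve to the int overload (or be ambiguous,
    // depending on how NULL is defined) - exactly the trap in question.
    std::string a = pick(0);       // selects pick(int)
    std::string b = pick(nullptr); // selects pick(float*), unambiguously
    return a + " / " + b;
}
```

Since std::nullptr_t converts to any pointer type but not to int, pick(nullptr) can only mean the pointer overload, which is the behaviour the original code intended.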

3. Convert enum to scoped enum

Another pattern that is enhanced with Modern C++ is a way you can define enums.

It was popular to write the following code:

enum ActionType {
    atNone,
    // ...more values
};

ActionType action = atNone;

Since C++11, it’s better to define this type in the following way:

enum class ActionType {
    None,
    // ...more values
};

ActionType action = ActionType::None;

What are the benefits of such transformations?

  • They don’t pollute the global namespace. As you may have noticed, it was often necessary to add various prefixes so the names wouldn’t clash. That’s why you see atNone. But in the scoped enum version, we can write None.
  • You get strong typing, and the compiler will warn when you want to convert into some integral value accidentally.
  • You can forward-declare scoped enums and thus save some header dependencies.
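A quick sketch of the strong-typing benefit (the enumerator names here are made up for the example):

```cpp
enum class ActionType { None, Process, Remove }; // illustrative enumerators

int toInt(ActionType a) {
    // int v = a;               // error: no implicit conversion to int
    return static_cast<int>(a); // the conversion must be explicit
}

// ActionType bad = 1;          // error: no implicit conversion from int
ActionType fromInt(int v) {
    return static_cast<ActionType>(v);
}
```

Both directions require an explicit static_cast, so accidental mixing of enum values and plain integers is caught at compile time.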

What’s best about Visual Assist is that it has a dedicated tool to convert unscoped enums to enum classes, with proper renames in all places where the type is used. Right-click on the type and select “Convert Unscoped Enum to Scoped Enum.” This opens a preview window where you can see and select which references will be replaced.


Read more in Convert Unscoped Enum to Scoped Enum in the VA documentation.

4. Use more auto

One of the key characteristics of Modern C++ is shorter and more expressive code. You already saw one example where we converted for loops with long iterator type names into nice and compact range-based for loops.

What’s more, we can also apply shorter syntax to regular variables thanks to automatic type deduction. In C++11, we have a “reused” keyword auto for that.

Have a look:

std::vector<int> vec { 1, 2, 3, 4, 5, 6, 7, 8};
std::vector<int>::const_iterator cit = vec.cbegin();

We can now replace it with:

std::vector<int> vec { 1, 2, 3, 4, 5, 6, 7, 8};
auto cit = vec.cbegin();

Previously, template-type deduction worked only for functions, but now it’s enabled for variables. It’s similar to the following code:

template <typename T>
void func(T cit) {
    // use cit...
}

std::vector<int> vec { 1, 2, 3, 4, 5, 6, 7, 8 };
func(vec.cbegin()); // template deduction here!

Following are some other examples:

auto counter = 0;   // deduces int
auto factor = 42.5; // deduces double
auto start = myVector.begin(); // deduces iterator type
auto& ref = counter; // reference to int
auto ptr = &factor;  // a pointer to double
auto myPtr = std::make_unique<int>(42);
auto lam = [](int x) { return x*x; };

Have a look at the last line above. Without auto, it wouldn’t be possible to name the type of a lambda expression, as it’s known only to the compiler and generated uniquely.

What do you get by using auto?

  • Much shorter code, especially for types with long names
  • Auto variables must always be initialized, so you cannot write unsafe code in that regard
  • Helps with refactoring when you want to change types
  • Helps with avoiding unwanted conversions
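That last point, avoiding unwanted conversions, deserves a tiny example (a sketch using std::map):

```cpp
#include <map>
#include <string>

int sumValues(const std::map<std::string, int>& m) {
    int total = 0;
    // The map's real value_type is std::pair<const std::string, int>.
    // Spelling out std::pair<std::string, int> here would still compile,
    // but would silently create a converted temporary for every element.
    for (const auto& kv : m) { // auto deduces the exact value_type
        total += kv.second;
    }
    return total;
}
```

With auto there is nothing to get subtly wrong: the deduced type is exactly what the container provides, so no hidden copies or conversions occur.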

As for other features, you’re also covered by Visual Assist, which can automatically apply auto. In many places, you’ll see the following suggestions:


This often happens in places like

SomeLongClassName* pDowncasted = static_cast<SomeLongClassName*>(pPtrToBase);
// no need to write SomeLongClassName twice:
auto pDowncasted = static_cast<SomeLongClassName*>(pPtrToBase);


unsigned int val = static_cast<unsigned int>(some_computation / factor);
// just use:
auto val = static_cast<unsigned int>(some_computation / factor);

As you can see, thanks to auto we get shorter code that is, in most cases, easier to read. If you’re concerned that auto hides the type name, you can hover over the variable and Visual Studio will show you that type. Additionally, features like “go to definition” in Visual Assist work as usual.

5. Deprecated functionality

With C++11, a lot of functionality was marked as deprecated and was removed in C++17. Visual Assist helps with conversion to new types and functions.

For example, it’s now considered unsafe to use random_shuffle, as it internally relied on plain rand(), which is very limited as a random number generator.

std::vector<int> vec { 1, 2, 3, 4, 5, 6, 7, 8 };
std::random_shuffle(vec.begin(), vec.end());

Visual Assist can replace the above code with the following:

std::shuffle(vec.begin(), vec.end(), std::mt19937(std::random_device()()));
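One refinement you may want over the one-liner above (a sketch, not part of the VA-generated fix): create the engine once and seed it explicitly, which makes shuffles reproducible, for example in tests.

```cpp
#include <algorithm>
#include <random>
#include <vector>

std::vector<int> shuffledCopy(std::vector<int> vec, unsigned seed) {
    std::mt19937 rng(seed);                    // seeded once, reusable
    std::shuffle(vec.begin(), vec.end(), rng); // same seed -> same order
    return vec;
}
```

A fixed seed gives a deterministic permutation on a given platform, while a std::random_device seed (as in the generated code) gives a fresh shuffle each run.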

Additionally, you’ll get suggestions to improve old C-style headers and convert them into C++ style:


Some other handy fixes

We have covered a few inspections, but there’s much more! Here are some exciting items that you might want to see:

The whole list is available here: List of Code Inspections


Thanks for reading our small series on Clang-Tidy, Code Inspections, and Visual Assist. We covered a lot of items, and I hope you learned something new. The techniques I presented are in most cases very easy to apply, especially thanks to VA support, and you can gradually refactor your code into Modern C++.

Technical Deep Dive: Reducing Memory Consumption in Visual Assist build 2393


November 2020’s release of Visual Assist had some significant memory usage improvements, great for those with large projects. Here’s how we did it.

Written by David Millington (Product Manager), Goran Mitrovic (Senior Software Engineer) and Christopher Gardner (Engineering Lead)

Visual Assist is an extension which lives inside Visual Studio, and Visual Studio is a 32-bit process, meaning that its address space, even on a modern 64-bit version of Windows, is limited to a maximum of 4GB. Today 4GB doesn’t always go very far. A typical game developer, for example, may have a large solution, plus perhaps the Xbox and Playstation SDKs, plus other tools – already using a lot of memory – plus Visual Assist. A lot of Visual Assist is built around analysis of your solution, and that requires storing data about your source code, what we call the symbol database. Sometimes, with all of these things squashed into one process’ memory space together, some users with large and complex projects start running out of memory.

The latest version of Visual Assist (build 2393, 28 Oct 2020) reduces in-process memory consumption significantly. It’s a change we think will help many of you who are reading this, because those with large solutions who run into memory issues should find those memory issues alleviated or disappearing completely; and those with small or medium solutions may find Visual Assist runs a little faster.

Plus, it’s just part of being a good member of the Visual Studio ecosystem: as an extension relied on by thousands of developers, yet sitting in the same shared process as other tools, we should have as little impact as we can.

Chart showing memory usage from a Find References operation in the Unreal Engine source code. The new build of Visual Assist uses 50% of the memory of the previous build (152 MB vs 302 MB).
Difference in memory usage running a Find References on the Unreal Engine 4.26 source code. Smaller is better.

The chart above shows that the new build of Visual Assist reduces VA’s memory usage by about 50%; that is, half as much memory is used. You can see more charts below, and we consistently see 30%, 40%, and 50% memory savings.

We’d like to share some technical information about the approaches we took to achieve this, and we hope that as C++ or Windows developers you’ll find the work interesting. This is a fairly long blog, and starts off light and gets more technical as it goes on. It covers:


Many readers may know this, but to follow this post here’s a quick primer on some concepts. Feel free to skip to the next section if you’re already familiar.

Address space refers to the total amount of memory an application can use. It’s usually limited by the size of a pointer (because a pointer points to memory): a 32-bit pointer can address 2^32 bytes (about 4 billion), which is 4GB. So we say that a 32-bit process has a 32-bit address space, or an address space of 4GB. (This is hand-wavy: although true, on 32-bit versions of Windows this 4GB was split between kernel and user mode, and a normal app can only access user-mode memory; depending on settings, this actually meant your app could access only 2GB or 3GB of memory before it could not allocate any more. On a 64-bit version of Windows, a 32-bit app has the entire 4GB available. Plus, there are techniques to access more memory than this even on 32-bit systems, which is getting ahead of where this blog post is going.)
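The arithmetic behind that 4GB figure, as a quick sketch:

```cpp
#include <cstdint>

// A pointer of N bits can distinguish 2^N addresses, i.e. 2^N bytes.
constexpr std::uint64_t addressableBytes(unsigned bits) {
    return bits >= 64 ? ~std::uint64_t{0} : (std::uint64_t{1} << bits);
}

constexpr std::uint64_t kGiB = 1024ull * 1024 * 1024;

// 2^32 bytes = 4 GiB: the ceiling for any 32-bit process, no matter
// how much physical RAM the machine has installed.
static_assert(addressableBytes(32) == 4 * kGiB, "32-bit address space is 4 GiB");
```

The same formula explains why a 64-bit process is effectively unconstrained: 2^64 bytes is about 16 exabytes, far beyond any machine’s RAM.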

This address space is a virtual address space; the application uses virtual memory. This means that the 4GB is not actually a sequential block of RAM. Pieces of it can be stored anywhere, including on disk (swapped out), and loaded in when needed. The operating system looks after mapping the logical address, the address your application uses, to the actual location in RAM. This is important because it means that what backs any address you use, i.e., where it actually points, is under Windows’ control.

A process is how your application is represented by the operating system: it is what has the virtual address space above, and contains one or more threads which are the code that runs. It is isolated from other processes, both in permissions and memory addresses: that is, two processes do not share the same memory address space.

If you’re really familiar with those concepts, we hope you’ll forgive us for such a short introduction. OS architecture including processes, threads, virtual memory etc is a fascinating topic.

Background – Where Visual Assist Uses Memory

Visual Assist parses your solution and creates what we call the ‘symbol database’, which is what’s used for almost every operation – find references, refactoring, our syntax highlighting (which understands where symbols are introduced), generating code, etc. The database is very string-heavy. It stores the names of all your symbols: classes, variables and so forth. While relationships between symbols can be stored with relatively little memory, strings themselves take up a lot of space in memory.

Our first work on memory optimization focused on the relationships, the links between data, and the metadata stored for each symbol, and we did indeed reduce the memory they used. But that left a large amount of memory used by strings generated from source code. In addition, our string allocation patterns meant strings were removed from memory not when memory filled up and there was allocation pressure, but only at predefined points in the code, which for large projects also increased the risk of out-of-memory errors.

Clearly we needed to solve string memory usage.

We researched and prototyped several different solutions.

String Storage Optimizations

Stepping back in time a little, string memory usage is obviously not a new problem. Over time, Visual Assist has used several techniques to handle string storage. While there are many possible approaches, here’s an overview of three, one of which Visual Assist used and two of which it has never used, presented in case they are new or interesting to you.

String interning addresses the case where there are many copies of the same string in memory. For example, while Visual Assist has no need to store the word “class”, you can imagine that in C++ code “class” is repeated many times throughout any solution. Instead of storing it many times, store it once, usually looked up by a hash, and each place that refers to the string holds a reference to the same single copy. (The model is: give a string to the interning code, and get back a reference to a string.) Visual Assist used to use string interning, but no longer does: reference counting was an overhead, and today the total amount of memory is not itself the issue, only the amount of memory used inside the Visual Studio process.
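For readers who haven’t seen the technique, a minimal interning pool might look like this; an illustrative sketch, not Visual Assist’s implementation:

```cpp
#include <cstddef>
#include <string>
#include <unordered_set>

// Each distinct string is stored exactly once; callers keep cheap
// pointers to the shared copy instead of duplicating the bytes.
class StringPool {
public:
    const std::string* Intern(const std::string& s) {
        // insert() is a no-op if the string is already pooled;
        // either way we hand back the address of the single copy.
        return &*pool_.insert(s).first;
    }
    std::size_t Size() const { return pool_.size(); }
private:
    // std::unordered_set never invalidates pointers to its elements
    // on insertion, so the returned pointers remain valid.
    std::unordered_set<std::string> pool_;
};
```

Interning "class" a thousand times stores one std::string and hands out a thousand identical pointers, which is where the memory saving comes from; the reference-counting overhead mentioned above comes from tracking when a pooled string can finally be released.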

While we’re discussing strings, here are two other techniques that are not useful for us but may be useful for you:

  • Compressing strings. Visual Assist never used this technique: decompression takes CPU cycles, and strings are accessed so often that the cost would be paid constantly.
  • Sharing common substrings: when a string contains another string, you can refer to that substring to save memory. A first cut might use substrings or string views. This is not useful for us: searching for substrings would be a significant slowdown, and the strings Visual Assist processes don’t contain enough duplicated substrings to make it worthwhile. A more realistic option in performance terms (insertion, for example) is to share data via prefix, such as storing strings in tries. However, due to the nature of the strings in C++ codebases, our analysis shows this is not a valuable approach for us either.

The approach we took in the recent work was not to optimise string storage further, but to move where the string storage is located entirely.

Moving Out Of Process: Entire Parser

Visual Assist is loaded into Visual Studio as a DLL. The obvious approach to reducing memory pressure in a 32-bit process is to move the memory – and whatever uses it – out of process, a phrase meaning splitting your app into multiple separate processes. Multi-process architectures are fairly common today: browsers use multiple processes for security, and Visual Studio Code hosts extensions in a helper process and provides code completion through separate language-server processes.

As noted in the primer above, a second process has its own address space, so if we split Visual Assist into two, even if the second was still a 32-bit process it could double the memory available to both in total. (Actually, more than double: the Visual Studio process where Visual Assist lives has memory used by many things; the second process would be 100% Visual Assist only. Also, once out of Visual Studio, we could make that second process 64-bit, ie it could use almost as much memory as it wanted.) Multi-process architectures can’t share memory directly (again hand-wavy, there are ways to read or write another process’s memory, or ways to share memory, but that’s too much for this blog); generally in a multi-process architecture the processes communicate via inter-process communication (IPC), such as sockets or named pipes, which form a channel where data is sent and received.

There were two ways to do this. The first is to move most of the Visual Assist logic out to a second process, with just a thin layer living inside Visual Studio. The second is just to move the memory-intensive areas. While the first is of interest, Visual Assist has very tight integration with Visual Studio – things like drawing in the editor, for example. It’s not trivial to split this and keep good performance. We focused instead on having only the parser or database in the second process.

Prototyping the parser and database in a second process, communicating with IPC back to the Visual Studio process, showed two problems:

  • Very large code changes, since symbols are accessed in many places
  • Performance problems. Serializing or marshalling data had an impact. Every release, we want VA to be at least the same speed, but preferably faster; any solution with a performance impact was not a solution at all

Moving Just Memory

Multi-process is not a required technique, just one approach. The goal is simply to reduce the memory usage in the Visual Studio process and move the memory elsewhere; we don’t have to do that by moving the memory to another process—and this is a key insight. While multiprocess is a fashionable technique today, it’s not the only one for the goal. Windows provides other tools, and the one we landed on is a memory mapped file.

In the primer above, we noted that virtual memory maps an address to another location. Memory-mapped files are a way to map a portion of your address space to something else – despite the name, the backing “file” may not be located on disk; the operating system can store the data in memory or in the pagefile. That data or file can be any size; the only thing restricted is your view of it: because of your 32-bit address space, if the mapped file is very large you can only see a portion of it at once. In other words, a memory-mapped file makes a portion of your address space a view onto a larger set of data than the address space can hold, and by moving that view (or views), a 32-bit process can – piece by piece – read and write more data than fits in a 32-bit address space.
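To make the windowing idea concrete, here is a small sketch. The blog describes the Windows mechanism (pagefile-backed file mappings); for portability this example uses the POSIX analogue – mmap over a temporary file – but the principle is the same: small fixed-size views into a backing object larger than any single view. All names and sizes here are illustrative:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Map 2MB windows into an 8MB backing object, one window at a time.
// On Windows the analogous calls are CreateFileMapping(INVALID_HANDLE_VALUE,
// ...) and MapViewOfFile; here we use mkstemp + mmap instead.
bool demo_windowed_mapping() {
    char path[] = "/tmp/mapdemoXXXXXX";
    int fd = mkstemp(path);
    if (fd == -1) return false;
    unlink(path);                          // storage lives until fd closes
    const long kTotal = 8L * 1024 * 1024;  // backing object: 8MB
    const size_t kView = 2 * 1024 * 1024;  // each view: one 2MB window
    if (ftruncate(fd, kTotal) != 0) return false;

    // Map a 2MB window at offset 0, write through it, then drop the view.
    char* v = static_cast<char*>(
        mmap(nullptr, kView, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (v == MAP_FAILED) return false;
    std::strcpy(v, "chunk zero");
    munmap(v, kView);

    // Map a different 2MB window, at offset 6MB, into the same storage.
    v = static_cast<char*>(
        mmap(nullptr, kView, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
             6L * 1024 * 1024));
    if (v == MAP_FAILED) return false;
    std::strcpy(v, "chunk three");
    munmap(v, kView);

    // Remap window 0: the data survived even while no view was mapped.
    v = static_cast<char*>(
        mmap(nullptr, kView, PROT_READ, MAP_SHARED, fd, 0));
    if (v == MAP_FAILED) return false;
    bool ok = std::strcmp(v, "chunk zero") == 0;
    munmap(v, kView);
    close(fd);
    return ok;
}
```

At no point is more than 2MB of the backing object visible in the address space, yet the process can read and write all 8MB of it – exactly the property that lets a 32-bit process work with more data than its address space holds.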

Benchmarking showed memory mapped files were significantly faster than an approach using any form of IPC.

The release notes for this build of Visual Assist referred to moving memory ‘out of process’. Normally that means to a second process. Here, we use the term to refer to accessing memory through a file mapping, that is, accessing memory stored by the OS, and potentially more memory than can fit in the process’ address space. Referring to it as out of process is in some ways a holdover from when the work was planned, because we had initially thought we would need multiple processes. But it’s an accurate term: after all, the memory is not in our process in a normal alloc/free sense; it’s an OS-managed (therefore out of process) resource into which our process has a view.

Note re ‘memory mapped files were significantly faster than … using any form of IPC’: sharing memory mapped files can be a form of IPC as well – you can create the mapped file in one process, open in another, and share data. However, we had no need to move logic to another process – just memory.

Chunk / Block Structure and Allocation / Freeing

We already mentioned that the data we store in the memory mapped file is strings only. In-process, we have a database with metadata fields; in the mapped view are all the strings. As you can imagine, any tool that processes source code uses strings heavily; therefore, there are many areas of that mapped memory we want to access at once, or nearly at once.

Our implementation uses many file mappings so it can read and write many areas at once. To keep their number manageable, each mapping is a fixed 2MB in size. Every one of these 2MB chunks is a fixed-size heap, with blocks of a fixed size. The chunks are stored in lists:

  • 2MB chunks that are mapped, and locked as they are being used to read or write data
  • Chunks that are mapped, but unused at the moment (this works like a most recently used cache, and saves the overhead of re-mapping)
  • Chunks that are not mapped
  • Chunks that are empty

Most C++ symbols fit within 256 bytes (of course, this is not always the case – something like Boost is famous for causing very long symbol names), so most chunks are subdivided into blocks 256 bytes long. To allocate or free a block is to mark it as used or unused, which can be done by setting a single bit. A 2MB chunk holds 8192 such blocks, so one kilobyte of memory is enough to hold one bit per block and manage allocation within a chunk. To find a free block, we can scan 32 bits at a time in a single instruction, BitScanForward, which is handily an intrinsic in Visual C++.

This means that Visual Assist’s database now always has a 40MB impact on the Visual Studio memory space – twenty 2MB chunks are always mapped – plus a variable amount of memory, often due to intermediate work such as finding references, which can be seen in the charts below and which even in our most extreme stress test topped out at a couple of hundred megabytes. We think this is very low impact given the whole 4GB memory space, and given Visual Assist’s functionality.


There are a number of optimizations we introduced this release, including faster parsing and resolution of type-deduced (‘auto’) variables, changes in symbol lookup, optimisations around template handling, and others – some of which we can’t discuss. I’d love to write about these, but they verge into territory we keep confidential about how our technology works. We can say that things like deducing types, or template specialisation, are faster.

However we can mention some optimizations that aren’t specific to our approach.

The best fast string-access code is code that never accesses strings at all. We hash strings and use the hashes for comparison (this is not new; Visual Assist has done it for a long time), so often it is not necessary to touch a string in memory at all.
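A minimal sketch of that hash-before-compare idea – using std::hash purely for illustration, since the post doesn't specify Visual Assist's actual algorithm:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>

// Store a precomputed hash alongside each symbol name; most comparisons
// then cost a single integer compare and never touch the characters.
struct HashedName {
    size_t hash;
    std::string text;

    explicit HashedName(std::string s)
        : hash(std::hash<std::string>{}(s)), text(std::move(s)) {}
};

bool equal(const HashedName& a, const HashedName& b) {
    if (a.hash != b.hash) return false;  // fast path: no string access
    return a.text == b.text;             // rare path: confirm on hash match
}
```

The character comparison only runs when the hashes collide or the strings really are equal, so in the common "different symbols" case the string memory is never read.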

Interestingly, we find that in very large codebases with many symbols (millions), hash collisions are a significant problem. This might seem surprising, since hash algorithms are usually good at avoiding collisions, but the reason is that we use an older algorithm whose hash value is only 32 bits wide – with only about four billion possible values, collisions become statistically likely (the birthday problem) once you store tens of thousands of strings, let alone millions. We plan to change algorithms in future.

We also have a number of other optimizations: for example, a very common operation is to look up the ancestor classes of a class, often resolving symbols on the way, and we have a cache for the results of these lookups. Similarly, we reviewed all database lookups for areas where we can detect ahead of time a lookup does not need to be done.
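A toy sketch of that ancestor-lookup cache follows. The flat name-keyed hierarchy, class names, and single-inheritance walk are all hypothetical simplifications for illustration:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Resolving a class's ancestors walks the base-class links; a cache keyed
// by class name means each chain is walked at most once.
class AncestorCache {
public:
    void set_base(const std::string& cls, const std::string& base) {
        base_[cls] = base;
    }

    const std::vector<std::string>& ancestors(const std::string& cls) {
        auto hit = cache_.find(cls);
        if (hit != cache_.end()) return hit->second;   // cached result

        std::vector<std::string> chain;                // walk base links
        for (auto it = base_.find(cls); it != base_.end();
             it = base_.find(it->second))
            chain.push_back(it->second);
        return cache_.emplace(cls, std::move(chain)).first->second;
    }

private:
    std::unordered_map<std::string, std::string> base_;  // class -> base
    std::unordered_map<std::string, std::vector<std::string>> cache_;
};
```

Repeated lookups for the same class – common when resolving members against a deep hierarchy – hit the cache and skip the walk entirely.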

Our code is heavily multi-threaded, and needs to be performant, avoid deadlocks, and avoid contention. Locking is kept minimal both in time (holding locks for the shortest duration) and in scope (never locking a resource that doesn’t need to be locked, ie, fine-grained locking). This has to be done very carefully to avoid deadlocks, and we spent significant time analysing the codebase and architecture.

In contrast, we also reduced our use of threads. In the past we would often create a new thread to handle a request, such as looking up symbol information, but creating a thread has a lot of overhead. Now we re-use threads much more aggressively, often via Visual C++’s parallel invocation and thread pool (the concurrency runtime): the cost of blocking a request until a pool thread is available to process it is usually lower than the cost of creating a new thread to process it.

Symbols are protected by a custom read-write lock, written entirely in userspace to avoid the overhead of kernel locking primitives. The lock is our own implementation, based around a semaphore, and we find it provides good performance.
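For illustration, here is a minimal userspace reader-writer lock built on atomics. Visual Assist's real lock parks waiting threads on a semaphore rather than spinning; this sketch spins (with a yield) but shows the same reader/writer accounting:

```cpp
#include <atomic>
#include <thread>

// state_ < 0: a writer holds the lock; state_ >= 0: count of active
// readers. Everything happens in userspace via compare-and-swap.
class SpinRwLock {
public:
    void lock_shared() {                     // reader entry
        for (;;) {
            int s = state_.load(std::memory_order_relaxed);
            if (s >= 0 && state_.compare_exchange_weak(
                              s, s + 1, std::memory_order_acquire))
                return;
            std::this_thread::yield();       // a real lock would park here
        }
    }
    void unlock_shared() { state_.fetch_sub(1, std::memory_order_release); }

    void lock() {                            // writer entry: needs 0 -> -1
        for (;;) {
            int expected = 0;
            if (state_.compare_exchange_weak(expected, -1,
                                             std::memory_order_acquire))
                return;
            std::this_thread::yield();
        }
    }
    void unlock() { state_.store(0, std::memory_order_release); }

private:
    std::atomic<int> state_{0};
};
```

Readers never block each other, and no kernel object is touched on the uncontended path, which is the property the post is after; the cost is that waiters burn CPU, which is why a production lock parks them on a semaphore instead.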


All these changes were done with careful profiling and measurement at each stage, testing with several C++ projects. These included projects with 2 million symbols, 3 million symbols, and 6 million symbols.

The end result is the following:

  • For very large projects – we’re talking games, operating systems, etc – you will see:
    • 50% less memory usage at peak during operations such as Find References (the difference in average memory usage is smaller once the operation completes; this is measured during the work, ie, while memory is being accessed heavily)
    • Identical performance to earlier versions of Visual Assist
  • For smaller projects:
    • Memory usage is reduced, though for a smaller project the effect is not so noticeable or important. But:
    • Improved performance compared to earlier versions of Visual Assist

In other words, something for everyone: you either get far less memory pressure in Visual Studio, with the same performance, or if memory was not an issue for you then you get even faster Visual Assist results.

Here are the results we see. We’re measuring the memory usage in megabytes after the initial parse of a project, and then after doing work with that project’s data—here, doing a Find References, which is a good test of symbol database work. Memory usage here is ascribed to Visual Assist; the total memory usage in the Visual Studio process is often higher (for example, Visual Studio might use 2GB of memory with a project loaded between itself and every other extension, but we focus here on the memory used by Visual Assist’s database and processing.)

One good test project is Unreal Engine:

Chart showing memory usage in an Unreal Engine project, with 28% less memory used after the initial parse, and 50% less after a Find References
Memory savings for Unreal Engine 4.26 (smaller is better):
Initial parse (180MB, vs 130MB now) and
Find References (302MB vs 152MB now)

Another excellent large C++ codebase is Qt:

Chart showing memory usage for Qt, with 40% less memory used after the initial parse, and also 40% less after a Find References
Memory savings for Qt (smaller is better):
Initial parse (424MB, vs 254MB now) and
Find References (481MB vs 291MB now)

Our feedback has been very positive: we often work with customers with very large codebases and significant memory usage from many sources inside the Visual Studio process, and the lower impact of Visual Assist has made a noticeable difference. That’s our goal: be useful, and be low-impact.

We hope you’ve enjoyed reading this deep dive into the changes we made in Visual Assist build 2393. Our audience is you – developers – and we hope you’ve been interested to get some insight into the internals of a tool you use. While as the product manager I get to have my name on the post, it’s not fair because I did nothing: the credit goes to the engineers who build Visual Assist, especially Goran Mitrovic and Sean Echevarria who were engineers developing this work, and Chris Gardner our team lead, all of whom helped write this blog post.

A build of VA with the changes described in this post is available for download now, including trying it for free if you’d like. If you’re new to VA, read the quick start or browse our feature list: our aim is not to have the longest feature list but instead to provide everything you need, and nothing you don’t. We also do so with minimal UI – VA sits very much in the background. Our aim is to interrupt or distract you as little as possible while being there when you need it. You can see the kind of engineering we put into VA. It’s built by developers for developers.

Developer Showcase: Visual Assist in Action

If you follow our blog, you’ve seen the features that our team is putting in place and likely felt their impact in your development. Instead of hearing more of the same, we thought we would share thoughts from one of our users. Meet Distalsoft, two brothers, one who can’t stop gaming and one with ideas. We’re not sure which one wrote this post, but either way check them out when you want to hunt for treasure or lose it all attempting Brexit.

C++ with Visual Assist, Visual Studio and Unreal Engine

It’s inspiring that software like Microsoft’s Visual Studio IDE and Epic’s Unreal Engine have been “free” for so long now. It pains me to think about where indie developers would be without the forward thinking force of these larger companies and their seemingly never-ending generosity. Helping start-ups and the like to get their feet on the ground is something I will forever be grateful for.

Although the tools come close to it, it would be naive to think they can fulfil every requirement. I want to point out a few problems we were facing at Distalsoft when it came to developing in Visual Studio using Unreal Engine, and the solution we ended up going with.

Unreal Engine at its core uses the C++ language. Visual Studio – being a Microsoft product – makes development in C# very enjoyable. On the other hand, development in C++ has been a point of frustration for many years. From VS 2010 to VS 2017, improvements have been made to the overall speed of compilation, Intellisense, file searching and the like, but it has taken until 2019 for them to really make a dent in the problem. I must say that VS 2019 has done an excellent job of addressing the aforementioned issues but the question still stands – could it be better?

A few years ago when I was using VS 2015, I’d had enough. I’d sometimes be waiting several minutes for Intellisense to bring back a list of methods or properties that belonged to an object. I’m pretty sure we’ve all done it – smashing the heck out of the ctrl+shift+space keys and cursing at the screen while waiting for the Intellisense to pop up. Or simple searches in the solution explorer that end up taking longer to resolve than simply finding the file yourself. Perhaps even trying to navigate to the definition of a method only to be waiting minutes for the document to load up. Something had to change. I went searching on the internet for a solution. It didn’t take long to stumble across a piece of software called Visual Assist by Whole Tomato. It had been recommended many times on various parts of the AnswerHub forum for Unreal Engine and StackOverflow. Due to it having a trial version, I downloaded a copy to give it a try.

The expression “night and day” really doesn’t do the software justice, but it’ll have to do for now. I was extremely relieved. Searching for files and definitions, even just including header files, now worked as you would expect from an IDE. The additional dialog menu available when right-clicking on bits of code you want to perform actions against has a variety of options that make you realise what was/is missing from out-of-the-box VS. To be honest, I don’t use half of them, but it’s the baseline mechanics that just work so much better. And, to address my biggest issue – the speed of Intellisense – type-ahead now loaded within seconds and sometimes even instantly. What a relief!

Epic have improved Unreal Engine’s documentation, but unfortunately it’s still not quite there. I would always see people answer questions on AnswerHub with “Look through the Unreal Engine code, it’s open source”. I always assumed they were joking: pressing F12 to go to method definitions without Visual Assist would take forever. Thanks to my new-found friend Visual Assist, I finally had the ability to go find the answers to some of the most annoying questions. It’s hard to really communicate just how irritating it used to be. Seriously, Visual Assist has made me a happy C++ coder again.

I suppose the last thing to make note of is that Visual Assist is not currently free, but the trial is sufficient to make you realise just how much happier you can be when using it. I would be interested to see Visual Assist introduce a subscription based payment plan, but you can’t complain. They have done a stellar job at creating a brilliant tool.

So in conclusion, go check it out. See for yourself. You won’t be disappointed.

This blog was brought to you by Distalsoft. If you’d like for us to showcase what you’re building (better and faster) with Visual Assist contact us.

Visual Assist build 2031 is available

Greetings from San Francisco!

Thank you to all of the customers who stopped by our booth at the Build conference. It’s clear we have ardent supporters. We look forward to providing you with the debugger improvement we demo’d. Expect a beta in a few months. To the many first-time visitors, we hope you find Visual Assist as valuable as we led you to believe.

Build 2031 of Visual Assist was made available the first day of Build, to coincide with Microsoft’s announcement of universal solutions and shared projects. Although beta, Visual Assist build 2031 has support for the new features of Visual Studio 2013 Update 2 RC. We’ll blog separately to discuss the impact of shared projects on several features of Visual Assist, including Find References, Rename, and Change Signature.

Visual Assist build 2031 requires maintenance through 2014.03.31. The build includes an entirely new editor for VA Snippets. The editor is available only in Visual Studio 2010 and newer, but since VA Snippets are shared among IDEs, you can edit VA Snippets in your newest IDE and your changes will be available in any older IDE you still use. We already have changes planned for the editor before GA, including a redo of the type filters, but we’d love to hear suggestions for more.

If you use a dark theme in Visual Studio 2012 or 2013, you won’t need your sunglasses when our find dialogs open. Enough said.

Learn what’s new in build 2031, or skip directly to download the installer.