Bartłomiej Filipek Reveals Why He’s a Big Fan of Visual Assist

Bartlomiej Filipek is a software developer from the Polish city of Cracow. Bartek, as he prefers to be called, started coding when he was 14 years old, after reading “C++ in 24h”, and got his first real programming job in 2007. Bartek’s broad professional experience includes native Windows desktop apps, OpenGL, gamedev, BioFeedback games, .NET, large-scale app development, flight planning, and graphics driver coding. As a master’s student he also created lectures about OpenGL and gamedev. Since 2014 he has worked as a C++ software developer for Xara.

Bartek took time to tell us about his experience with Visual Assist, and why he’s a big fan.

When did you start using Visual Assist and how long have you been using it?

It’s hard to recall, but it was very early in my professional career. After using Visual Studio (VS 2003 and 2005) for some time, I knew I needed some more productivity enhancements. Visual Studio 2005 or 2008 wasn’t the best at that. I searched for solutions and found Visual Assist – which was amazing.

What was it like building software before you had Visual Assist?

Visual Studio was great for development, but it lacked some essential improvements like the refactoring helpers. I wasn’t able to efficiently rename my variables or member functions. And most importantly, the speed of Intellisense was relatively poor on large solutions. Sometimes I had to wait a couple of seconds to get the list of methods for my object.

How does Visual Assist help you create your applications?

First of all, I love many refactoring tools that I get with VA. I can quickly and safely rename my variables, functions, and much more. Another important part is code extraction and the ability to move the code back and forth between a header file or the implementation. Additionally, VA is super fast with even larger projects, and I can quickly get a list of member functions, function signatures, and much more. Recently, I have also been exploring VA Code Inspections which helps with code modernization.

What made Visual Assist stand out from other options?

I think it has three elements:

  • Performance (as it’s super fast even for large solutions)
  • Lots of refactoring capabilities 
  • Lots of options to understand and move through code faster

What made you happiest about working with Visual Assist?

I like the overall speed of Visual Assist. I can quickly jump around my code, see definitions, list member functions, and even make notes with hashtags.

What have you been able to achieve through using Visual Assist to help create your applications?

I think I can write safer code (thanks to code inspections) and avoid code smells like large functions, because I can quickly make them smaller with the code extraction tools. Overall, it contributes to better quality and readability of my code.

What are some future plans for your applications?

At Xara – my main job – we plan to make some great enhancements to our powerful online document editor. For my other projects, especially blogs and educational content, I hope to experiment with some of the latest C++20 features and practice good modern C++ techniques.


Thank you, Bartek! You can find the link to Bartek’s blog here.

How To Modernize With Visual Assist Part 2

In the previous article, you read about five popular techniques to improve your projects and apply several Modern C++ patterns. Here’s a list of five more things! We’ll go from the override keyword to nullptr, scoped enums, and more. All techniques are super-easy with Visual Assist!

1. Override

Let’s have a look at the following code with a simple class hierarchy:

using BmpData = std::vector<int>;

class BaseBrush {
public:
    virtual ~BaseBrush() = default;

    virtual bool Generate() = 0;

    virtual BmpData ApplyFilter(const std::string&) { return BmpData{}; }
};

class PBrush : public BaseBrush {
public:
    PBrush() = default;

    bool Generate() { return true; }

    BmpData ApplyFilter(const std::string& filterName) {
        std::cout << "applying filter: " << filterName;
        return BmpData{};
    }

private:
    BmpData m_data;
};

When you run VA code inspections, you’ll immediately see that it complains about not using the override keyword on those two virtual methods.

In short, the override keyword allows us to explicitly specify that a given member function overrides a function from a base class. It’s a contextual keyword available since C++11.

As you might already know, you can go to VA Code Inspection Results and apply the fixes automatically. You’ll get the following code:

class PBrush : public BaseBrush {
public:
    PBrush() = default;

    bool Generate() override;

    BmpData ApplyFilter(const std::string& filterName) override;

private:
    BmpData m_data;
};

What are the benefits of using override?

The most significant advantage is that we can now easily catch mismatches between the virtual base function and its override. When there is even a slight difference in the declaration, virtual polymorphism might silently stop working.

Another critical point is code expressiveness. With override, it’s effortless to read what the function is supposed to do.

Another benefit is being more modern: this keyword is also available in other languages like C#, Visual Basic, Java, and Delphi.

2. nullptr

When I work with legacy code, my Visual Assist Code Inspection Result is often filled with lots of the following items:

[Image: VA Code Inspection results suggesting nullptr]

This often happens with code like:

if (pInput == NULL) {
    LOG(Error, "input is null!");
    return;
}

or

pObject->Generate("image.bmp", NULL, NULL, 32);

Why does Visual Assist complain about the code? It’s because NULL is just a define that means 0 – an integer, not a real null pointer. This is also a problem when you have code like:

int func(int param);
int func(float* data);

If you call:

func(NULL);

You could expect the overload taking a pointer to be called, but it’s not. That’s why an often-cited guideline in C and early C++ suggests not overloading functions on pointer and integral types.

The solution? Just use nullptr from C++11.

nullptr is not 0; it has a distinct type, std::nullptr_t.

Now when you write:

func(nullptr);

you can expect the proper function invocation. The version with func(float* data) will be invoked.

Not to mention that nullptr is a separate keyword in C++, so it stands out from the regular code. Sometimes NULL is displayed in a different color, but sometimes it is not.

Visual Assist makes it super easy to apply the fix, and it’s a very safe change.

3. Convert enum to scoped enum

Another pattern that is enhanced with Modern C++ is a way you can define enums.

It was popular to write the following code:

enum ActionType {
    atNone,
    atProcess,
    atRemove,
    atAdd
};

ActionType action = atNone;

Since C++11, it’s better to define this type in the following way:

enum class ActionType {
    None,
    Process,
    Remove,
    Add
};

ActionType action = ActionType::None;

What are the benefits of such transformations?

  • They don’t pollute the global namespace. As you may have noticed, it was often necessary to add various prefixes so the names wouldn’t clash. That’s why you see atNone. But in the scoped enum version, we can write None.
  • You get strong typing, and the compiler will warn when you want to convert into some integral value accidentally.
  • You can forward-declare scoped enums and thus save some file dependencies.

What’s best about Visual Assist is that it has a separate tool to convert unscoped enums to enum classes, with proper renames in all the places where the type is used. Right-click on the type and select “Convert Unscoped Enum to Scoped Enum.” This opens a preview window where you can see and select which references will be replaced.

Read more in Convert Unscoped Enum to Scoped Enum in the VA documentation.

4. Use more auto

One of the key characteristics of Modern C++ is shorter and more expressive code. You saw one example where we converted for loops with long iterator names into nice and compact range-based for loops.

What’s more, we can also apply shorter syntax to regular variables thanks to automatic type deduction. In C++11, we have a “reused” keyword auto for that.

Have a look:

std::vector<int> vec { 1, 2, 3, 4, 5, 6, 7, 8};
std::vector<int>::const_iterator cit = vec.cbegin();

We can now replace it with:

std::vector<int> vec { 1, 2, 3, 4, 5, 6, 7, 8};
auto cit = vec.cbegin();

Previously, template type deduction worked only for functions, but now it’s enabled for variables. It’s similar to the following code:

template <typename T> void func(T cit) {
    // use cit...
}

std::vector<int> vec { 1, 2, 3, 4, 5, 6, 7, 8 };
func(vec.cbegin()); // template deduction here!

Following are some other examples:

auto counter = 0;   // deduces int
auto factor = 42.5; // deduces double
auto start = myVector.begin(); // deduces iterator type
auto& ref = counter; // reference to int
auto ptr = &factor;  // a pointer to double
auto myPtr = std::make_unique<int>(42);
auto lam = [](int x) { return x*x; };

Have a look at the last line above. Without auto, it wouldn’t be possible to name a type of a lambda expression as it’s only known to the compiler and generated uniquely.

What do you get by using auto?

  • Much shorter code, especially for types with long names
  • Auto variables must always be initialized, so you cannot write unsafe code in that regard
  • Helps with refactoring when you want to change types
  • Helps with avoiding unwanted conversions

As for other features, you’re also covered by Visual Assist, which can automatically apply auto. In many places, you’ll see the following suggestions:

[Image: VA suggestions to use auto]

This often happens in places like

SomeLongClassName* pDowncasted = static_cast<SomeLongClassName*>(pPtrToBase);
// no need to write SomeLongClassName twice:
auto pDowncasted = static_cast<SomeLongClassName*>(pPtrToBase);

or

unsigned int val = static_cast<unsigned int>(some_computation / factor);
// just use:
auto val = static_cast<unsigned int>(some_computation / factor);

As you can see, thanks to auto we get shorter code that is, in most cases, easier to read. If you are worried that auto hides the type name, you can hover over the variable and Visual Studio will show you that name. Additionally, features like “go to definition” from Visual Assist work as usual.

5. Deprecated functionality

Starting with C++11, a lot of functionality was marked as deprecated, and much of it was removed in C++17. Visual Assist helps with conversion to new types and functions.

For example, it’s now considered unsafe to use random_shuffle, as it internally relied on the simple rand(), which is very limited as a random number generator.

std::vector<int> vec { 1, 2, 3, 4, 5, 6, 7, 8 };
std::random_shuffle(vec.begin(), vec.end());

Visual Assist can replace the above code with the following:

std::shuffle(vec.begin(), vec.end(), std::mt19937(std::random_device()()));

Additionally, you’ll get suggestions to improve old C-style headers and convert them into C++ style:

[Image: suggestions to convert C-style headers to C++ style]

Some other handy fixes

We have covered a few inspections, but there’s much more! The whole list is available here: List of Code Inspections.

Summary

Thanks for reading our small series on Clang-Tidy, Code Inspections, and Visual Assist. We covered a lot of items, and I hope you learned something new. In most cases, the techniques I presented are very easy to use, especially thanks to VA support, and you can gradually refactor your code into Modern C++.


How to Modernize C++ Code with Visual Assist in Five Easy Steps

You probably know that over time our projects seem to get old and legacy. Code written now might look suspicious five years later. With the standardization of C++11 in 2011, developers coined the term Modern C++. In this article (and the next one) we’ll take a look at some techniques you might apply to get nicer code and be closer to the best features that recent versions of C++ offer.

Let’s start! 

1. Rename and Aim for Meaningful Names

You might be surprised by the first point on our list.

Is rename added to C++11 as a feature? 

No, it’s definitely not just an element of C++. In fact, having good names is not a feature of any programming language, as it depends on your style. The compiler can take any names, including single letters and even almost any Unicode character. Thanks to better IDE support and powerful text editors, we can now avoid shortcuts and rely on full and meaningful names. 

And what’s most important is that we can leverage extra refactoring tools, like those from Visual Assist, to fix code with the legacy style. 

Let’s have a look at the following class: 

class PBrush { 
public: 
	PBrush(); 
	~PBrush(); 

	bool Gen(HDC r) { return false; }
	bool IsAval() const { return aval; } 

private: 
	bool aval; 
	
	HGLOBAL m_hGlobal; 
	LPBITMAPINFO lpBitmap; 
	LPBYTE lpBits; 
	HDC ReferenceDC; 
	
	RGBQUAD m_pal[256]; 
}; 

Read the above code and think about potential naming issues. Is the code clear and descriptive? What would you change? 

Or take a look at the following function, which uses these PBrush objects:

std::vector<PBrush*> FilterAvalBrushes(const std::vector<PBrush*>& brs) {
	std::vector<PBrush*> o;
	ULONG totalBCount = 0; 
	size_t totalBitmaps = 0; 
	
	for (size_t i = 0; i < brs.size(); ++i) { 
		if (brs[i]->IsAval()) { 
			o.push_back(brs[i]); 
			++totalBitmaps; 
			totalBCount += brs[i]->GetByteCount(); 
		} 
	} 
	
	// make sure they have some bits ready: 
	for (size_t i = 0; i < o.size(); ++i) { 
		if (!o[i]->Bits()) 
			Log("ERROR, empty bitmap!"); 
	} 
	
	Log("Byte count %d, total bitmaps %d", totalBCount, totalBitmaps);
	return o;
}

How could we improve the code? 

You might be tempted to say that it’s better to leave old code untouched, since other systems might depend on it. However, changing to better names while keeping the code working for those systems is very simple. In most cases, you can find and replace old names or use Rename from Visual Assist.

Visual Assist has a powerful feature for renaming objects and making sure the code compiles after this transformation. It was also one of the earliest VA features and enabled using this refactoring capability years before Visual Studio added it. 

You can invoke the rename tool in many situations. 

The simplest way is to hit a key shortcut when your cursor is inside a particular name ( Shift + Alt + R by default). For example, let’s say that I’d like to rename bool aval, so I move my cursor there and invoke the dialog, and then I can see the following window: 

Another option is to write a new name, and then VA will show a handy context menu where you can invoke the rename or see the preview window before making the changes. 

What other things can you do with rename? 

Improve consistency of names. For example, your style guide might suggest adding m_ before each nonstatic data member for classes. So you might quickly improve it and add this prefix for classes that don’t share this style. In our example, some names start with m_ while others don’t. 

It’s not only variables and functions but also things like enumerations and namespaces. 

But most importantly, what name would be better for aval? Do you prefer m_isAvailable or Available? I’ll leave it as an exercise.

2. Extract Smaller Functions 

Here’s another example to fix: 

// number of data components 
switch ( format ) 
{ 
case GL_RGB: 
case GL_BGR_EXT: components = 3; break; 
case GL_RGBA: 
case GL_BGRA_EXT: components = 4; break; 
case GL_LUMINANCE: components = 1; break; 
default: components = 1; 
} 

GLuint t; 
glGenTextures(1, &t); 

// load data 
HBITMAP hBmp = (HBITMAP)LoadImage(NULL, fname, IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE | LR_CREATEDIBSECTION); 
if (hBmp == NULL) 
	return 0; 
	
// rest of code... long code

The above code is just a part of a longer function that opens a file, loads its bytes, and creates an OpenGL texture object from the loaded data. One thing we can do immediately is make this function smaller. More compact functions are easier to read and allow for more code reuse. We can see that it has several smaller subtasks, so it would be better to extract them into separate parts. For example, we can extract a FormatToComponents helper using Extract Method.

And when you select this context menu, you’ll get the following dialog: 

After some adjustments (for example, we don’t need to take components as input argument), we can prepare the following code: 

int FormatToComponents(GLenum format) { 
	switch (format) { 
	case GL_RGB: 
	case GL_BGR_EXT: return 3; 
	case GL_RGBA: 
	case GL_BGRA_EXT: return 4; 
	case GL_LUMINANCE: return 1; 
	} 
	return 1; 
} 

And then call it from our initial function: 

GLuint LoadTextureFromBmp(const char *fname, GLenum format, GLuint filter) {
	const int components = FormatToComponents(format);

This not only gave us shorter, cleaner code, but also we can now mark our variable as const, which even improves the quality. Additionally, this function might be reused in some other places of the system. 

Here are the general rules for the extraction by VA:

  • Symbols available globally, or within the class of the original and new methods, are not passed via a parameter list.
  • Symbols used only on the right side of expressions (i.e., not modified) are passed by value.
  • Symbols used on the left side of expressions (i.e., assigned) are passed by reference.
  • Symbols referenced with dot notation within the extracted code (e.g., classes and structs) are always passed by reference.
  • When a newly created method assigns a value to only one symbol, that symbol is passed by value and the assignment is done in the original method via return value.
  • Symbols local to extracted code become local to the new method; they are not passed.

 3. Improve Function Interfaces 

Let’s now challenge ourselves with this one: 

bool Init2d(HDC hdcContext, FontMode fm, HFONT hFont, GLuint iWidth, GLuint iHeigth, GLuint iTexture, bool forceBold, bool forceItalic, size_t numGlyphs); 

It’s a member function in GLFont that is responsible for creating a texture font from a Window font. We can later use this created object to draw text in an OpenGL application. 

Isn’t it worrying that this function takes nine input arguments? 

And here’s an even more suspicious-looking call site: 

glFont.Init2d(const_cast<CGLApp *>(glApp)->GetHdc(), fmBitmap, hFont, iWidth, iHeight, 0, false, false, 256);

Some books suggest that if you have more than 5… or 7 parameters, then it’s a worrying code smell. Such long lists of function arguments are hard to read, and call sites might easily pass them in the wrong order. Another issue is that there are a few boolean parameters, and it’s hard to see what they mean from the caller’s perspective.

You can use Change Signature to modify all parts of a signature, including: 

  • Method name 
  • Return type 
  • Visibility 
  • Parameter names 
  • Parameter types 
  • Parameter order 
  • Number of parameters 
  • Qualifiers 

The plan for our function is to introduce a structure that would hold all parameters. We can do it by simply copying the parameter list and putting it into a structure above the function declaration: 

struct FontCreationParams {
	GLuint iWidth;
	GLuint iHeigth;
	GLuint iTexture { 0 };
	bool forceBold { false };
	bool forceItalic { false };
	size_t numGlyphs { 256 };
};

As you can see, I even assigned some default values. 

And then we can change the signature in two steps: 

1. Add a new argument to the function: const FontCreationParams& param, with a default value of TODO.

2. Then remove the extra arguments.

Then you might compile code and fix call sites. 

The final code might look as follows: 

FontCreationParams params; 
params.iWidth = iWidth; 
params.iHeigth = iHeight; 
glFont.Init2d(const_cast<CGLApp *>(glApp)->GetHdc(), fmBitmap, hFont, params);

With this approach, if you need some more parameters (or you need to alter them), it’s just a matter of modifying the structure, and the function declaration will be the same. 

While you can do such a change manually, Visual Assist makes it much more comfortable and takes care of most of the cases at the call site. So this significantly simplifies and automates the process when you have many function calls. 

So far, we discussed basic principles for code modernization, but how about automation? Can we use extra tools that would tell us what to change in our code? 

Let’s see the fourth point. 

4. Enable Code Inspections 

For a few years now, Visual Assist has shipped with Code Inspections that automatically check your code and pinpoint issues.

Code Inspection analyzes C/C++ for specific code quality issues as you edit. The feature, based on LLVM/Clang, warns you of issues in your code and if possible, suggests and applies quick fixes. 

You can enable them via Settings or as shown below: 

You can read the full description of Inspections on the excellent documentation website (Introduction to Code Inspection) and also see our previous article, where we dive a bit deeper: A Brief Introduction to Clang-Tidy and Its Role in Visual Assist – Tomato Soup.

Currently, we have almost 30 code inspections, and the number grows with each revision: List of Code Inspections.

They range from checking .empty() vs size(), to using emplace_back(), nullptr usages, and many more. They can help not only with code modernization but also with finding some performance issues. It’s like a lightweight code analysis pass over your code.

Let’s take a look at one related to for loops.

5. Make For-Loops More Compact 

One of the coolest elements of C++11 style comes from using simpler for loops. This is especially important for code that uses containers from the Standard Library. 

For example, before C++11, you had to write: 

std::vector<int> vec { 1, 2, 3, 4, 5, 6 };
for (std::vector<int>::iterator it = vec.begin(); it != vec.end(); ++it)
	std::cout << *it << '\n';

Now, this can be simplified with this simple syntax: 

std::vector<int> vec { 1, 2, 3, 4, 5, 6 }; 
for (const auto& elem : vec) 
	std::cout << elem << '\n';

Why not automate this transformation? 

VA identifies places where you can use this modern syntax and then rewrites loops for you. 

Consider this example: 

std::vector<PBrush*> FilterAvalBrushes(const std::vector<PBrush*>& brushes) {
	// ...
	for (size_t i = 0; i < brushes.size(); ++i) { 
		if (brushes[i]->IsAval()) { 
			out.push_back(brushes[i]); 
			++totalBitmaps; 
		} 
	} 
} 

When code inspections are enabled, we can see the following suggestions:

What’s nice about Visual Assist is that it’s very smart and tries to deduce the correct loop syntax. 

For example, when our vector holds pointers, then the converted loop is as follows: 

std::vector<PBrush*> FilterAvalBrushes(const std::vector<PBrush*>& brushes) {
	// ...
	for (auto brushe : brushes) { 
		if (brushe->IsAval()) { 
			out.push_back(brushe); 
			++totalBitmaps; 
			totalBCount += brushe->GetByteCount(); 
		} 
	} 
} 

But when we change the type into const std::vector<PBrush> (so it holds not pointer but “full” objects), then the loop is as follows: 

for (const auto & brushe : brushes) { 
	if (brushe.IsAval()) { 
		out.push_back(brushe); 
		++totalBitmaps; 
		totalBCount += brushe.GetByteCount(); 
	} 
}

The system is smart and avoids situations where you’d copy objects during loop iteration, as it’s more efficient to use references. 

That’s just a taste of what you can automatically do with Visual Assist and its code inspections! 

Summary 

In this article, you’ve seen five suggestions on how to check and improve your code. We started with something fundamental but often skipped: naming. Then we made functions smaller and with better interfaces. And at the end, we enabled code inspections so that Visual Assist can automatically identify more things to repair. 

In the last item, we looked at loop transformation, which is available from C++11. Thanks to Visual Assist, we can automate this task and quickly modernize our code to the newer standard.

After such refactoring, we aim to have safer, more readable code. What’s more, with more expressive code, we can help the compiler to identify potential errors. This is just a start, and stay tuned for the next article where you’ll see other code modernization techniques: override keyword, nullptr, enum class, auto type deduction, and deprecated headers.

A Brief Introduction To Clang-Tidy And Its Role in Visual Assist

As a developer, you probably know that parsing C++ source code is very complicated. This is also the reason why there might be fewer good tools for coding assistance than in other “lighter” languages. Fortunately, parsing C++ source code has become much more straightforward over the past decade, mainly because of the Clang/LLVM infrastructure. Clang not only introduced a new excellent C++ compiler but also many extra tools that allow us to analyse C++ code efficiently.

Today we’ll meet one tool that can make our life much easier! It’s called clang-tidy.

We’ll cover this valuable utility as it’s also an important part of the Visual Assist internal code analysis system.

Let’s Meet clang-tidy

Here’s a brief description of this handy tool:

clang-tidy is a clang-based C++ “linter” tool. Its purpose is to provide an extensible framework for diagnosing and fixing typical programming errors, like style violations, interface misuse, or bugs that can be deduced via static analysis. clang-tidy is modular and provides a convenient interface for writing new checks.

As you can read above, this tool gives us a way to check our source code and fix common issues.

Let’s now see how you can install and work with this tool on Windows.

Installation on Windows

clang-tidy is attached with the main Clang package for Windows. You can download it from this site:

LLVM Download Page – version 11.0 (Nov 2020)

(Or also see other Releases: Download LLVM releases)

When Clang is installed, you can open your command line (for example, for this article I’m using PowerShell) and type:

clang-tidy --version

I get the following output:

PS D:\code> clang-tidy --version
LLVM (http://llvm.org/):
LLVM version 11.0.0
Optimised build.
Default target: x86_64-pc-windows-msvc
Host CPU: haswell

In general, clang-tidy operates on a single input file. You pass a filename, plus standard compilation flags, and then extra flags about the selected checks to perform.

Let’s run a simple test to see how it works.

Running a Simple Test

See this artificial code:

#include <vector>
#include <memory>
#include <iostream>
#include <string>

int main() {
    int a;

    std::string hello {"hello"};
    std::string world {"world"};
    if (hello + world == "")
        std::cout << "empty!\n";

    std::unique_ptr<std::string> ptr { new std::string { "abc" } };
    ptr.reset(new std::string{ "xyz" });
    ptr.reset(NULL);

    std::vector<int> vec{ 1, 2, 3, 4 };
    for (std::vector<int>::iterator it = vec.begin(); it != vec.end(); ++it)
        std::cout << *it << '\n';
}

Can you now play a game and list all of the “spurious” smells in the code? What would you improve here?

Let’s now compare your results with suggestions from clang-tidy.

I ran the tool with the following options:

clang-tidy --checks='modernize*, readability*' hello_tidy.cpp -- -std=c++14

On my machine, I get the following results:

As you can see above, I ran it against all checks in the “modernize” and “readability” categories. You can find all available checks here: clang-tidy – Clang-Tidy Checks — Extra Clang Tools 12 documentation

You can also list all available checks for your version with the following command:

clang-tidy --list-checks

In summary, the tool found the following issues:

  • suggestion to use a trailing return type for main()
  • suggestion to use .empty() rather than comparing a string with an empty string
  • make_unique
  • nullptr rather than NULL
  • modernising to a range-based for loop
  • braces around single-line if and loop statements

Experimenting with Fixes

But there’s more!

clang-tidy can not only find issues but also fix them! A lot of checks have “auto-fixes”. By passing the -fix option, you can ask the tool to modify the code. For example:

clang-tidy --checks='modernize-* readability-container*' -fix hello_tidy.cpp -- -std=c++14

As you can see, this time I used “readability-container” only as I didn’t want to modify braces around simple if statements.

I got the following output:

clang-tidy nicely lists all the fixes that it managed to apply.

The final source code looks as follows:

#include <memory>   // oops!
#include <vector>
#include <memory>
#include <iostream>
#include <string>

auto main() -> int {
	int a;

    std::string hello {"hello"};
    std::string world {"world"};
    if (hello + world.empty())   // oops!
        std::cout << "empty!\n";

	std::unique_ptr<std::string> ptr { new std::string { "abc" } };
	ptr = std::make_unique<std::string>( "xyz" );
	ptr.reset(nullptr);

	std::vector<int> vec{ 1, 2, 3, 4 };
	for (int & it : vec)
		std::cout << it << '\n';
}

In the above code sample, the tool managed to fix several issues, and we can decide whether to apply all of them or just a few. For example, I’m not sure about using a trailing return type for all functions. Additionally, the tool couldn’t apply make_unique in the place where we declare and initialise ptr. Hopefully, with each new revision, we’ll get even better results.

But it’s also important to remember that you have to be careful with the fixes.

See lines 1 and 12.

Clang-tidy added an extra, duplicated “memory” header file (this was probably needed for make_unique()). This is not critical, but it shouldn’t happen. What’s worse is that at line 12 the code is now wrong.

The line if (hello + world == "") was changed into if (hello + world.empty()). This changes the meaning of the statement!

When applying fixes, be sure to review code and check if this is what you expected.

I hope you now get a general idea of how to use the tool and what its output is.

Running this tool from a command line might not be the best choice. Fortunately, there are many options on how we can plug it inside our build system or IDE.

For example, you can learn how to make use of it with CMake on the KDAB blog: Clang-Tidy, part 1: Modernise your source code using C++11/C++14 – KDAB

Let’s now have a brief look at the options in Visual Studio.

Clang Tools in Visual Studio

clang-tidy support is available starting with Visual Studio 2019 version 16.4.

clang-tidy is the default analysis tool when using the LLVM/clang-cl toolset, but you can also enable it with the MSVC compiler:

And you can configure it with a separate property panel:

And when you run a code analysis you can get results to your Error/Warning list:

Unfortunately, the current version of Visual Studio doesn’t offer a way to apply fixes, so you need to modify the code on your own.

Luckily, with the help of Visual Assist, you can change it. See below.

How Visual Assist Makes Things Much Safer And Easier

Visual Assist offers a feature called “Code Inspection”, which is based on a standalone LLVM/Clang embedded into the VA system. You can enable it (even in Visual Studio 2008!), and for our simple example you might get the following results (in a separate VA Code Inspection window):

And what’s best is that for many of them you can apply a fix!

See the following context menu:

This is great! Thanks to the embedded LLVM/Clang subsystem, Visual Assist can perform the analysis and help you with many tasks related to code modernisation and fundamental code analysis!

Better still, Visual Assist cleans up the output from clang-tidy and makes sure the fixes are correct and safe. Here's the code after I applied all the suggestions:

#include <vector>
#include <memory>
#include <iostream>
#include <string>

int main() {
	int a;

	std::string hello{ "hello" };
	std::string world{ "world" };
	if ((hello + world).empty())
		std::cout << "empty!\n";

	std::unique_ptr<std::string> ptr{ new std::string { "abc" } };
	ptr = std::make_unique<std::string>( "xyz" );
	ptr.reset(nullptr);

	std::vector<int> vec{ 1, 2, 3, 4 };
	for (int & it : vec)
		std::cout << it << '\n';
}

Nice!

As you can see, there's no extra include statement. And, most importantly, at line 12 VA added extra parentheses, so the whole expression is now correct and safe!

Summary

In this article, we covered clang-tidy – a handy tool for code analysis that can (experimentally) fix your code automatically! The tool is quite verbose and might be hard to configure for large projects. In addition, please make sure you review the code when applying fixes.

By default, you can download it and launch it from the command line, but it's much better to use it from Visual Studio (with some limitations).

To get the best experience and safety have a look at the embedded clang-tidy inside Visual Assist – in the form of “VA Code Inspections”. This extra feature makes sure the results of code analysis are easy to read and meaningful, and the fixes are correct.

Today we only scratched the surface of this exciting topic. In two upcoming blog posts you’ll see some more use cases where Visual Assist can help you with code refactoring and modernisation (also leveraging embedded clang-tidy). Stay tuned.

For now you can read more in:

Technical Deep Dive: Reducing Memory Consumption in Visual Assist build 2393

Technical Deep Dive: Reducing Memory Consumption in Visual Assist build 2393

November 2020’s release of Visual Assist had some significant memory usage improvements, great for those with large projects. Here’s how we did it.

Written by David Millington (Product Manager), Goran Mitrovic (Senior Software Engineer) and Christopher Gardner (Engineering Lead)

Visual Assist is an extension which lives inside Visual Studio, and Visual Studio is a 32-bit process, meaning that its address space, even on a modern 64-bit version of Windows, is limited to a maximum of 4GB. Today 4GB doesn’t always go very far. A typical game developer, for example, may have a large solution, plus perhaps the Xbox and PlayStation SDKs, plus other tools – already using a lot of memory – plus Visual Assist. A lot of Visual Assist is built around analysis of your solution, and that requires storing data about your source code, what we call the symbol database. Sometimes, with all of these things squashed into one process’ memory space together, some users with large and complex projects start running out of memory.

The latest version of Visual Assist (build 2393, 28 Oct 2020) reduces in-process memory consumption significantly. It’s a change we think will help many of you who are reading this, because those with large solutions who run into memory issues should find those memory issues alleviated or disappearing completely; and those with small or medium solutions may find Visual Assist runs a little faster.

Plus, it’s just part of being a good member of the Visual Studio ecosystem: as an extension relied on by thousands of developers, yet sitting in the same shared process as other tools, we should have as little impact as we can.

Chart showing memory usage from a Find References operation in the Unreal Engine source code. The new build of Visual Assist uses 50% of the memory of the previous build (152 MB vs 302 MB)
Difference in memory usage running a Find References on the Unreal Engine 4.26 source code. Smaller is better.

The chart above shows that the new build of Visual Assist reduces VA’s memory usage by about 50% – that is, half as much memory is used. You can see more charts below, and we consistently see 30%, 40% and 50% memory savings.

We’d like to share some technical information about the approaches we took to achieve this, and we hope that as C++ or Windows developers you’ll find the work interesting. This is a fairly long post that starts off light and gets more technical as it goes on.

Concepts

Many readers may know this, but to follow this post here’s a quick primer on some concepts. Feel free to skip to the next section if you’re already familiar.

Address space refers to the total amount of memory an application can use. It’s usually limited by the size of a pointer (because a pointer points to memory): a 32-bit pointer can address 2^32 (about 4 billion) bytes, which is 4GB. So we say that a 32-bit process has a 32-bit address space, or an address space of 4GB. (This is hand-wavy – although true, on 32-bit versions of Windows this 4GB was split between kernel and user mode, and a normal app can only access user-mode memory; depending on settings, this actually meant your app could access only 2GB or 3GB of memory before it could not allocate any more. On a 64-bit version of Windows, a 32-bit app has the entire 4GB available. Plus, there are techniques to access more memory than this even on 32-bit systems, which is getting ahead of where this blog post is going.)

This address space is a virtual address space; the application uses virtual memory. This means that the 4GB is not actually a sequential block of memory in RAM. Pieces of it can be stored anywhere, including on disk (swapped out), and loaded in when needed. The operating system looks after mapping the logical address, the address your application uses, to the actual location in RAM. This is important because it means that what backs any address you use, where it actually points, is under Windows' control.

A process is how your application is represented by the operating system: it is what has the virtual address space above, and contains one or more threads which are the code that runs. It is isolated from other processes, both in permissions and memory addresses: that is, two processes do not share the same memory address space.

If you’re really familiar with those concepts, we hope you’ll forgive us for such a short introduction. OS architecture including processes, threads, virtual memory etc is a fascinating topic.

Background – Where Visual Assist Uses Memory

Visual Assist parses your solution and creates what we call the ‘symbol database’, which is what’s used for almost every operation – find references, refactoring, our syntax highlighting (which understands where symbols are introduced), generating code, etc. The database is very string-heavy. It stores the names of all your symbols: classes, variables and so forth. While relationships between symbols can be stored with relatively little memory, strings themselves take up a lot of space in memory.

Our first work on memory optimization focused on the relationships, the links between data, and metadata stored for each symbol, and we did indeed reduce the memory they used. But that left a large amount of memory used by strings generated from source code. In addition our string allocation patterns meant strings were not removed from memory when memory became full and there was allocation pressure, but instead at predefined points in code, and that also increased the risk for large projects of getting out of memory errors.

Clearly we needed to solve string memory usage.

We researched and prototyped several different solutions.

String Storage Optimizations

Stepping back in time a little, string memory usage is obviously not a new problem. Over time, Visual Assist has used several techniques to handle string storage. While there are many possible approaches, here’s an overview of three, one of which Visual Assist used and two of which it has never used, presented in case they are new or interesting to you.

String interning addresses when there are many copies of the same string in memory. For example, while Visual Assist has no need to store the word “class”, you can imagine that in C++ code “class” is repeated many times throughout any solution. Instead of storing it many times, store it once, usually looked up by a hash, and each place that refers to the string has a reference to the same single copy. (The model is to give a string to the interning code, and get back a reference to a string.) Visual Assist used to use string interning, but no longer: reference counting was an overhead, and today the amount of memory is not an issue itself, only the amount of memory used inside the Visual Studio process.

While we’re discussing strings, here are two other techniques that are not useful for us but may be useful for you:

  • To compress strings. Visual Assist never used this technique: decompression takes CPU cycles, and strings are used a lot so this would occur often.
  • To make use of common substrings in other strings: when a string contains another string, you can refer to that substring to save memory. A first cut might use substrings or string views. This is not useful for us due to the overhead of searching for substrings, which would be a significant slowdown, and the nature of the strings Visual Assist processes not leading to enough duplication of substrings to make it worthwhile. A more realistic option (in terms of performance, such as insertion) for sharing string data than shared substrings would be to share data via prefix, such as storing strings in tries. However, due to the nature of the strings in C++ codebases, our analysis shows this is not a valuable approach for us.

The approach we took to the recent work was not to focus on further optimising string storage, but to move where the string storage was located completely.

Moving Out Of Process: Entire Parser

Visual Assist is loaded into Visual Studio as a DLL. The obvious approach to reducing memory pressure in a 32-bit process is to move the memory – and whatever uses it – out of process, which is a phrase used to mean splitting your app up into multiple separate processes. Multi-process architectures are fairly common today. Browsers use multiple processes for security. Visual Studio Code hosts extensions in a helper process, or provides code completion through a separate process.

As noted in the primer above, a second process has its own address space, so if we split Visual Assist into two, even if the second was still a 32-bit process it could double the memory available to both in total. (Actually, more than double: the Visual Studio process where Visual Assist lives has memory used by many things; the second process would be 100% Visual Assist only. Also, once out of Visual Studio, we could make that second process 64-bit, ie it could use almost as much memory as it wanted.) Multi-process architectures can’t share memory directly (again hand-wavy, there are ways to read or write another process’s memory, or ways to share memory, but that’s too much for this blog); generally in a multi-process architecture the processes communicate via inter-process communication (IPC), such as sockets or named pipes, which form a channel where data is sent and received.

There were two ways to do this. The first is to move most of the Visual Assist logic out to a second process, with just a thin layer living inside Visual Studio. The second is just to move the memory-intensive areas. While the first is of interest, Visual Assist has very tight integration with Visual Studio – things like drawing in the editor, for example. It’s not trivial to split this and keep good performance. We focused instead on having only the parser or database in the second process.

Prototyping the parser and database in a second process, communicating with IPC back to the Visual Studio process, showed two problems:

  • Very large code changes, since symbols are accessed in many places
  • Performance problems. Serializing or marshalling data had an impact. Every release, we want VA to be at least the same speed, but preferably faster; any solution with a performance impact was not a solution at all

Moving Just Memory

Multi-process is not a required technique, just one approach. The goal is simply to reduce the memory usage in the Visual Studio process and move the memory elsewhere; we don’t have to do that by moving the memory to another process—and this is a key insight. While multiprocess is a fashionable technique today, it’s not the only one for the goal. Windows provides other tools, and the one we landed on is a memory mapped file.

In the primer above, we noted that virtual memory maps an address to another location. Memory-mapped files are a way to map a portion of your address space to something else – despite the name, and that it’s backed by a file, it may not be located on disk. The operating system can actually store that data in memory or a pagefile. That data or file can be any size; the only thing restricted is your view of it: because of the 32-bit size of your address space, if the mapped file is very large you can only see a portion of it at once. That is, a memory mapped file is a technique for making a portion of your address space be a view onto a larger set of data than can fit in your address space, and you can move that view or views so, although doing so piece by piece, ultimately a 32-bit process can read and write more data than fits in a 32-bit address space.

Benchmarking showed memory mapped files were significantly faster than an approach using any form of IPC.

The release notes for this build of Visual Assist referred to moving memory ‘out of process’. Normally that means to a second process. Here, we use the term to refer to accessing memory through a file mapping, that is, accessing memory stored by the OS, and potentially more memory than can fit in the process’ address space. Referring to it as out of process is in some ways a holdover from when the work was planned, because we had initially thought we would need multiple processes. But it’s an accurate term: after all, the memory is not in our process in a normal alloc/free sense; it’s an OS-managed (therefore out of process) resource into which our process has a view.

Note re ‘memory mapped files were significantly faster than … using any form of IPC’: sharing memory mapped files can be a form of IPC as well – you can create the mapped file in one process, open in another, and share data. However, we had no need to move logic to another process – just memory.

Chunk / Block Structure and Allocation / Freeing

We already mentioned that the data we stored in the memory mapped file is strings only. In-process, we have a database with metadata fields; in the mapped view are all the strings. As you can imagine, any tool that processes source code uses strings heavily; therefore, there are many areas in that mapped memory that we want to access at once, or nearly at once.

Our implementation uses many file mappings so it can read and write many areas at once. To keep the number of mappings manageable, each is a fixed size of 2MB. Every one of these 2MB chunks is a fixed heap, with blocks of a fixed size. These are stored in lists:

  • 2MB chunks that are mapped, and locked as they are being used to read or write data
  • Chunks that are mapped, but unused at the moment (this works like a most recently used cache, and saves the overhead of re-mapping)
  • Chunks that are not mapped
  • Chunks that are empty

Most C++ symbols fit within 256 bytes (this is not always the case, of course; something like Boost is famous for causing very long symbol names). This means most chunks are subdivided into blocks 256 bytes long. To allocate or free is to mark one of these blocks as used or unused, which can be done by setting a single bit. Therefore, per two-megabyte chunk, one kilobyte of memory is enough to hold one bit per block and so to allocate or deallocate blocks within a chunk. To find a free block, we can scan 32 bits at a time with a single instruction, BitScanForward, which is handily an intrinsic in Visual C++.

This means that Visual Assist’s database now always has a 40MB impact on the Visual Studio memory space—twenty 2MB chunks are always mapped—plus a variable amount of memory often due to intermediate work, such as finding references, which can be seen in the charts below and which even in our most extreme stress test topped out at a couple of hundred megabytes. We think this is very low impact given the whole 4GB memory space, and given Visual Assist’s functionality.

Optimizations

There are a number of optimizations we introduced this release, including improvements for speedily parsing and resolving the types of type-deduced ('auto') variables, changes in symbol lookup, optimisations around template handling, and others. I'd love to write about these, but they verge into territory we keep confidential about how our technology works. We can say that things like deducing types, or template specialisation, are faster.

However we can mention some optimizations that aren’t specific to our approach.

The best fast string access code is code that never accesses strings at all. We hash strings and use that for string comparison (this is not new, but something Visual Assist has done for a long time.) Often it is not necessary to access a string in memory.

Interestingly, we find in very large codebases with many symbols (millions) that hash collisions are a significant problem. This is surprising, because hash algorithms are usually good at avoiding collisions; the reason is that we use an older algorithm, one with a hash value only 32 bits wide. We plan to change algorithms in the future.

We also have a number of other optimizations: for example, a very common operation is to look up the ancestor classes of a class, often resolving symbols on the way, and we have a cache for the results of these lookups. Similarly, we reviewed all database lookups for areas where we can detect ahead of time a lookup does not need to be done.

Our code is heavily multi-threaded, and needs to be performant, avoid deadlocks, and avoid contention. Code is carefully locked for minimal locking, both in time (duration) and scope (to prevent locking a resource that is not required to be locked, ie, fine-grained locking.) This has to be done very carefully to avoid deadlocks and we spent significant time analysing the codebase and architecture.

In contrast, we also reduced our use of threads. In the past we would often create a new thread to handle a request, such as looking up symbol information. Creating a thread has a lot of overhead. Now we re-use threads much more aggressively, often via Visual C++'s parallel invocation and thread pool (the concurrency runtime); the cost of blocking an information request until a thread is available to process it is usually lower than the cost of creating a thread to process it.

Symbols are protected by a custom read-write lock, written entirely in userspace to avoid the overhead of kernel locking primitives. This lock is custom-written by us and is based around a semaphore, which we find provides good performance.

Results

All these changes were done with careful profiling and measurement at each stage, testing with several C++ projects. These included projects with 2 million symbols, 3 million symbols, and 6 million symbols.

The end result is the following:

  • For very large projects – we’re talking games, operating systems, etc – you will see:
    • 50% less memory usage between low/high peaks doing operations such as Find References (average memory usage difference is less once the operation is complete; this is measured during the work and so during memory access.)
    • Identical performance to earlier versions of Visual Assist
  • For smaller projects:
    • Memory usage is reduced, though for a smaller project the effect is not so noticeable or important. But:
    • Improved performance compared to earlier versions of Visual Assist

In other words, something for everyone: you either get far less memory pressure in Visual Studio, with the same performance, or if memory was not an issue for you then you get even faster Visual Assist results.

Here are the results we see. We’re measuring the memory usage in megabytes after the initial parse of a project, and then after doing work with that project’s data—here, doing a Find References, which is a good test of symbol database work. Memory usage here is ascribed to Visual Assist; the total memory usage in the Visual Studio process is often higher (for example, Visual Studio might use 2GB of memory with a project loaded between itself and every other extension, but we focus here on the memory used by Visual Assist’s database and processing.)

One good test project is Unreal Engine:

Chart showing memory usage in an Unreal Engine project, with 28% less memory used after the initial parse, and 50% less after a Find References
Memory savings for Unreal Engine 4.26 (smaller is better):
Initial parse (180MB, vs 130MB now) and
Find References (302MB vs 152MB now)

Another excellent large C++ codebase is Qt:

Chart showing memory usage for Qt, with 40% less memory used after the initial parse, and also 40% less after a Find References
Memory savings for Qt (smaller is better):
Initial parse (424MB, vs 254MB now) and
Find References (481MB vs 291MB now)

Our feedback has been very positive: we often work with customers with very large codebases and significant memory usage from many sources inside Visual C++, and the lower impact of Visual Assist has made a noticeable difference. That’s our goal: be useful, and be low-impact.

We hope you’ve enjoyed reading this deep dive into the changes we made in Visual Assist build 2393. Our audience is you – developers – and we hope you’ve been interested to get some insight into the internals of a tool you use. While as the product manager I get to have my name on the post, it’s not fair because I did nothing: the credit goes to the engineers who build Visual Assist, especially Goran Mitrovic and Sean Echevarria who were engineers developing this work, and Chris Gardner our team lead, all of whom helped write this blog post.

A build of VA with the changes described in this post is available for download now, including trying it for free if you’d like. If you’re new to VA, read the quick start or browse our feature list: our aim is not to have the longest feature list but instead to provide everything you need, and nothing you don’t. We also do so with minimal UI – VA sits very much in the background. Our aim is to interrupt or distract you as little as possible while being there when you need it. You can see the kind of engineering we put into VA. It’s built by developers for developers.

Visual Assist Build 2393 is Here!

We’re really making an effort to keep blogs more instructional or industry related, but man o’ man is this release worth a few lines. Our team has been working behind the scenes for much of the year on performance improvements. And sure, we could’ve just done a few things here and there over time to make it feel like we were consistently improving (which we are), but with this release we decided we were ready to go all out.

Essentially, we parse all of your source code and store that information in a database. Of course, the larger your code base, the larger the database. And this can bottleneck you pretty badly in Visual Studio. So we’ve moved a portion of the database out of process, decreasing the memory we consume inside Visual Studio and increasing the memory available to everything else. Yes, this is an oversimplification of what we’ve done and doesn’t do justice to our developers, but download the latest, give it a whirl, and let us know what you think.

Of course we didn’t stop there. We’ve updated our code inspection tool, LLVM/Clang, to the latest and greatest. We also added Code Inspection for the performance-inefficient-algorithm clang checker. And we added more suggestions for UE4 macros. You can check out our full release notes here and be sure to download. Remember, maintenance must be current as of 2020.10.28.

Whether you’ve been using Visual Assist for a while or you’re brand new, you’re going to notice a difference with this build. Happy Coding!

Prevent debugger from stepping into unwanted functions in Visual Studio

When you’re in a debugging session, sometimes the debugger can visit a lot of trivial functions or code from third-party libraries. In Visual Studio and also in Visual Assist, there are ways to filter out call stack events so that you can focus just on the critical code path.

Read on to find out how.

Background

As you may already know, step into, step over, and step out are essential while debugging. However, it would be very time-consuming to visit all of the functions. Sometimes the debugger might lead you to methods that are not important from your perspective.

Let’s see an example:

#include <iostream>
#include <string>

class Param {
public:
    Param(const std::string& str, int val) : mStr(str), mVal(val) { }

    std::string GetStrVal() const {
        return mStr + "..." + std::to_string(mVal);
    }

private:
    std::string mStr;
    int mVal;
};

void CallBackFunction(Param p1, Param p2) {
    std::cout << p1.GetStrVal() << "\n";
    std::cout << p2.GetStrVal() << "\n";
}

int main() {
    CallBackFunction({ "Hello", 1 }, { "World", 2 });
}

In the above example you can find simple code that wraps a named integer parameter in a separate type: Param. When you run the code, it should output the following:

Hello...1
World...2

The problem is that we have an ellipsis ... rather than the colon : that we wanted in the first place. You can run the debugger and see where this output comes from.

Set a breakpoint at the line where CallBackFunction is called and then press F11 to try to step into the function. Since the code is relatively simple, we can just step into every procedure and investigate the program flow. Where will the debugger go at the start? I see something like this (assuming you have “Just My Code” disabled):

(Just My Code is available for C++ since Visual Studio 2017 15.8. See this post for more information: Announcing C++ Just My Code Stepping in Visual Studio | C++ Team Blog.)

In the above example, the debugger goes to the next instruction after the breakpoint. In our case, it's the constructor of std::string!

While sometimes it might be interesting to look into the internals of the Standard Library, it’s probably not the best place to look for the solution to our problem with the string output.

Imagine what happens if you have several parameters. When you want to step into a method, you’ll first need to visit all code related to the creation of the parameters. The whole process might be frustrating, and you would lose a lot of time before going into the target function. Of course, you might just set a breakpoint at the beginning of the destination code; this will skip all of the unwanted behavior. But there is a better option.

What if you could control and filter out unwanted functions?

Filtering in Visual Studio

Before VS 2012 it was relatively tricky to filter out the code. Usually, it involved playing with some registry values. Fortunately, since Visual Studio 2012, this useful feature has been much improved.

All you need to do is edit the default.natstepfilter XML file. It’s usually located in:

// VS 2015
Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Packages\Debugger\Visualizers

// VS 2019:
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\Packages\Debugger\Visualizers

The location follows the same pattern for other versions, such as VS 2013 and VS 2012.

To add more functions that will be filtered out, you can use regular expressions:

In our example, when we want to skip all the functions from the std:: namespace, we can write the following:

std\:\:.*

and add this line to default.natstepfilter:

<Function>
    <Name>std\:\:.*</Name>
    <Action>NoStepInto</Action>
</Function>

Please note that the file might be write-protected, so you may need elevated rights to edit it. The default.natstepfilter file is loaded each time a debugger session starts.

You can now set up filters for the most common functions in your project that you don't want to visit during debugging. Usually, those are trivial functions or functions from third-party code (std, boost). Remember not to filter out too much!

In the latest versions of Visual Studio, you can also benefit from “Just My Code”, which won’t enter code that is not “your” module. That means system calls, the Standard Library, ATL, MFC, and others. You can even set your third-party libraries.

While each revision of Visual Studio makes it easier and easier to control the debugging flow, there’s an even faster and more straightforward solution.

See below:

Even more with Visual Assist

In Visual Assist, you have a separate window that allows you to see all debugger events and filter them during the debugging session!

You can show it from Visual Assist -> Debug -> VA Step Filter. You also need to have debugger integration enabled (it's turned on by default).

For our example, the first time I run a debugger and press F11, it will still step into the constructor of std::string. But the VA Step Filter window will show the following output:

As you can see in the Debug Events section, Visual Assist lists: breakpoints, step into, step out, and step over actions.

I can now click on std::string::basic_string... and then, the next time the debugger enters the same event, it will skip it and move forward. In our case, it will step into the constructor of our Param class.

By default, VA has several predefined filters for the shared Windows libraries. You can also control the filters on a global level (all projects) or per solution.

What are the main advantages?

  • It’s more visual than text files and XML files you have to edit for native debugger settings.
  • You can easily switch the event while you’re in a debugging session. In our case, I can enable it for the first argument evaluation for CallBackFunction but disable it for the second argument, with no need to restart the session.
  • It has more control than “Just My Code” because sometimes you can skip an unwanted function from your application code. For one debugging session, you might want to get to the solution faster by skipping some places.
  • You can control the filter per solution level.

Try it out and see more information on the separate page: VA Step Filter

References

Fly over the IDE Gaps

Going Remote or Crazy?

If you’re like most, you’re reading this from the comfort (hopefully) of your home office. We hope whatever you’re doing, you’re being safe and smart about it. Our staff lucked out as most were already working from home when all this started. So the transition has been fairly easy, with the exception of having more distractions and noise. But, we know the transition for companies and teams can be difficult.

Over the last month we’ve been working on a way to make licensing for remote teams easier. Consider this our official launch of Concurrent Licensing for Standard License users!

Concurrent Licensing will allow companies to purchase a number of concurrent active user licenses and have teams or individuals access a license at any time. Licenses will be “checked out” upon startup and “returned” after closing Visual Assist, freeing up the license to be used by another employee. This is especially helpful when teams jump from one project to another that may or may not require Visual Assist.

We hope concurrent licensing will allow for easier management and greater flexibility for remote employees that may work different office hours or be traveling to hunker down with family.

For more information about making the switch or adding a concurrent pool to your existing licenses, contact your account manager today.

Visual Assist Build 2375

  • NEW Added Code Inspection for the modernize-deprecated-headers clang checker.
    • Some headers from the C standard library are deprecated in C++ and are no longer welcome in C++ codebases. This check replaces C standard library headers with their C++ alternatives and removes redundant ones.
  • NEW Added MakeShared/MakeUnique Smart Suggestions for UE4 smart pointer initialization.
  • NEW Change Signature on UE4 RPC/BlueprintNative methods now updates the implementations.
  • NEW Reduced memory required in very large solutions.
    • You’re not the only one that’s noticed larger solutions causing memory exhaustion. We’re doing our part to reduce our memory consumption.
  • NEW Code Inspection engine updated to LLVM/Clang version 10.
    • Again, keeping up with the times and updating as we’re able. While this won’t impact you much today, this sets us up for more to come in the future.

Check out the full release notes and be sure to download it. Remember, your maintenance must be current as of 2020.05.09.

How to add notes and navigation metadata directly in source code in Visual Studio

Comments in code need not be just text floating around the functions, variables, and classes; they can also carry extra semantic information. With this extra metadata, you can navigate through projects much faster or even organize your knowledge. In this blog post, I’ll show you two ways to add extra metadata to comments in Visual Studio.

Intro

Navigating through a large codebase can be a complicated task. It’s especially an issue in big projects (not to mention legacy systems) where logical parts are spread across many different files.

Visual Studio offers many tools that help with moving between headers, declarations, class hierarchies, or all references of a given symbol. But what if you’d like to put down a “todo” item? Or some extra note? Such supplementary information can not only help with quick tasks but can also build knowledge about a system.

Here are two features you might want to use in Visual Studio:

  • Task List
  • Hashtags (as an extra plugin)

Let’s start with the first one.

Task Lists

Visual Studio contains a feature that enables us to add metadata directly in comments; it’s called the Task List. Take a look at this piece of code from my legacy project:

class ShaderProgram
{
private:
    GLuint mId;
    std::vector<Shader *> mShaders; // refactor: 
                                    // convert to 
                                    // smart pointers!
public:
    // todo: implement other special member functions!
    ShaderProgram();
    ~ShaderProgram();
};

As you can see above, I put keywords like refactor: or todo: inside comments.

Visual Studio builds an index of all comments with those special keywords and shows them in a separate window:

Visual Studio task list

This is a handy way of managing simple activities or just taking some small notes for the future. What’s more, the refactor keyword is a custom token; Visual Studio lets you define such tokens in the environment settings.

Here’s the link to the documentation Use the Task List – Visual Studio | Microsoft Docs

The Task List is a nice improvement! The metadata lives inside comments so that other developers can pick up the same information. Still, custom keywords are not easily shared between machines, and the Task List window offers only basic support; for example, it doesn’t group entries (such as grouping all of the todo lines).

Is there anything better?

Hashtags in Visual Assist

For several years I’ve been a happy user of Visual Assist – an excellent tool for enhancing various aspects of Visual Studio (have a look at my previous blog posts about it). The tool also has a powerful feature called Hashtags. This is a combination of named bookmarks, task lists, and tags that you might know from social networks.

Take a look at the example from my project with extra notes:

/// creates and can build GLSL programs, #shadersSystem
class ShaderProgram
{
private:
    GLuint mId;
    std::vector<Shader *> mShaders; // #refactor
                                    // convert to 
                                    // smart pointers 
public:
    // #refactor #ruleOfZero implement 
    // other special member functions!
    ShaderProgram();
    ~ShaderProgram();
};

As you can see, this is just regular source code, but notice the words prefixed with #. I’m using them to mark places that might be worth refactoring. For example, notes about refactoring:

std::vector<Shader *> mShaders; // #refactor 
                                // convert to smart pointers

Or another example:

/// creates and can build GLSL programs, #shadersSystem

This time I’ve used #shadersSystem which groups elements that are crucial to the handling of OpenGL Shaders in my animation application.

Below you can see all of the tags that are displayed in the VA Hashtags window. Similarly to Visual Studio, Visual Assist scans the source code and presents the indexed data.

Hash Tags Window

In the picture above, you can see lots of different tags that I put throughout the code, for example:

  • #GlutCallbacks – they refer to all callbacks that I passed to the GLUT framework (for my OpenGL Windows Application). With all of the tags grouped under one item, I can quickly move between those functions.
  • #refactor or #modernize – things to improve later.
  • Other “notes” that refer to some subsystems like #uiTweaks, #mainLoop and others.

The screenshot also shows a considerable difference compared to the task window. The hashtags are grouped, and you can also filter them or hide them by project name, directory, or even filename.

Tags are created “on the fly”; there’s no need to predefine them in some configuration window. You just type # followed by some text (you can configure the minimum length of a tag; it’s 3 characters by default).

Tags are stored directly in the source code, so other developers can immediately see the same information (assuming they also use Visual Assist). Such functionality can even be used to build a simple task manager, where you assign a task by the name of a developer:

// #bartekToDo: please improve this code! don't use goto!

Here are some more things you can do with them:

  • Cross-reference other tags. You can write see:#otherTag and Visual Assist will build an extra list per tag that you can check.
  • When you type #, Visual Assist will autocomplete tags, so you can easily refer to existing hashtags.
  • When you hover over a hashtag in the hashtags window, you’ll see a tooltip with the source code near the tag; this helps you get oriented without actually jumping to the code.
  • And many more!

Summary

In this short blog post, I wanted to describe two tools you can use to add extra information to comments. Small todo: comments or additional notes can enhance your daily work, build a handy list of actions for the future, or even build up knowledge about the system.

Depending on your preferences, you can keep those extra notes permanently in your code or just temporarily and gradually move the knowledge and todo actions into external documentation or task manager.

In my daily work, I prefer VA hashtags over regular bookmarks, as they are easier to use in most cases and provide extra information and structure. You can read more about VA Hashtags on the documentation page: Visual Assist Hashtags.

Back to you

  • Do you use some extra tools that help you with code navigation?
  • Do you use Visual Studio task window and task comments?
  • Have you tried VA hashtags?

This blog was brought to you by Bartlomiej Filipek; you can check out his C++ blog at bfilipek.com.