Saturday, 20 March 2021

Building .NET code generators

.NET 5 is finally live, and there are a number of amazing new features that came out with it. You can read the announcement post here: https://devblogs.microsoft.com/dotnet/announcing-net-5-0/

But the thing I've been waiting for the longest, and was most excited by, was the GA release of code generators! I've been waiting for something like this for years, and was thrilled by the early announcements and releases, but didn't really get a chance to play with them much. Plus, the early releases were a bit tricky to use and the tooling was very much lacking, so I didn't apply them in any projects until now.

The general principle is simple: code generators are somewhat similar to analyzers in behavior - they run as part of the code analysis / build - but the main difference is that you get the ability to add new source files to your compilation on the fly. There are some caveats though, some of them being:

  • The process is additive only - you can't modify or delete existing code using code generators (at least for the moment); you can only add new code.
  • All code generators run in parallel with each other and can't inspect each other's generated output. This means you can't have one generator analyze code that was created by another.

This may change in the future, as source generators evolve and improve.

So let's see what it takes to generate some code!
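As a quick taste before diving in, a minimal generator looks something like this (a bare-bones sketch - the namespace and message are placeholders of my own choosing):

using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

[Generator]
public class HelloGenerator : ISourceGenerator
{
    public void Initialize(GeneratorInitializationContext context)
    {
        // No setup needed for this simple example
    }

    public void Execute(GeneratorExecutionContext context)
    {
        // Add a brand new source file to the compilation on the fly
        var source = @"namespace Generated
{
    public static class Hello
    {
        public static string Message => ""Hello from a generator!"";
    }
}";
        context.AddSource("Hello.g.cs", SourceText.From(source, Encoding.UTF8));
    }
}

The generator itself lives in a netstandard2.0 project that references the Microsoft.CodeAnalysis.CSharp package, and the consuming project references it as an analyzer.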

(Read More)

Sunday, 07 February 2021

Making Windows open applications on a specific monitor

When you have multiple monitors, Windows will generally open applications on the main one, but that might not always be the case. Furthermore, sometimes you want a specific app to always open on your second monitor, leaving the primary one free.

For a long time I tried to find a way to configure this, and when I didn't find anything, I assumed there was no way to control it. But a while ago I stumbled upon a SuperUser question, and one of the answers was exactly what I was looking for.

This doesn't seem to be common knowledge, so I decided to post about it here, both to spread the knowledge and for my own records, for the time when I will inevitably forget it once again (this is not something you have to configure often, and it's the third time I've gone searching for this key sequence).

To change which monitor an app should open by default, do the following:

  • Ensure all instances of the app are closed. (This step might not be required, but I've noticed that without it, the trick sometimes refuses to work.)
  • Open your program.
  • Move it to the monitor on which you would like it to open by default.
  • Hit the Windows key + Shift + Enter.
  • Close the application and open it again - it should now open on the correct monitor.

Friday, 17 January 2020

Deploying ClickOnce Apps with SDK style projects

ClickOnce is a bit of an old beast these days, but in some cases you may have to keep an old app updated, so you have to dust off the old tools. I ended up in such a situation, and thought I'd see what I could do to ease some of the pain of the process. I had an old ClickOnce app written back in the .NET 4.5 days, with some old code and the old project format.

While I know that ClickOnce doesn't support .NET Core apps, would it support the new SDK-style projects? Let's try it out.
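The starting point is converting the project file to the minimal SDK-style shape - something like this (the values are illustrative, and a WinForms or WPF app needs the desktop SDK and a few extra properties):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net45</TargetFramework>
  </PropertyGroup>
</Project>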

(Read More)

Wednesday, 15 January 2020

Setting up TLS access to Active Directory over LDAP

I needed to enable access to my AD over LDAP, but didn't want to use unencrypted connections. Active Directory supports TLS connections, but for this you usually need to install an Enterprise Root CA (some details on TechNet here), which is WAY more than I needed.

Looking more into this, I learned that I don't need a full CA - I just need a certificate installed on each domain controller, and that would be enough. Luckily, since I had already set up my own CA previously, I could just use it to issue the certificates and install them on the DCs.
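Once a certificate is in place, a quick way to check that LDAPS actually works is to bind over port 636. Here's a minimal C# sketch using System.DirectoryServices.Protocols (the host name is a placeholder):

using System;
using System.DirectoryServices.Protocols;

class LdapsCheck
{
    static void Main()
    {
        // Connect to the domain controller over the LDAPS port (636)
        var identifier = new LdapDirectoryIdentifier("dc01.example.local", 636);
        using (var connection = new LdapConnection(identifier))
        {
            connection.SessionOptions.SecureSocketLayer = true;
            connection.SessionOptions.ProtocolVersion = 3;

            // Bind fails if the TLS handshake or certificate validation fails
            connection.Bind();
            Console.WriteLine("LDAPS bind successful");
        }
    }
}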

(Read More)

Tuesday, 16 July 2019

Running .NET Core apps on Raspberry Pi Zero

Recently I wrote a small app that I was planning to run on my Raspberry Pi Zero, since it's incredibly compact! With .NET Core now being cross-platform, I assumed it would be a simple affair, so I compiled and deployed the app targeting the linux-arm runtime, and tried to run it on the device, only to be met with this:


./ConsoleApp
Segmentation fault

I thought something was wrong, so I tried it on a newer Raspberry Pi, and everything worked great.

A quick search later, I learned that the Raspberry Pi Zero uses an ARMv6 processor, while the .NET Core JIT depends on ARMv7 instructions.

I thought that this was the end of it, but upon doing a little more research, I found that the Mono framework can actually run on Raspberry Pi Zero!

The solution was simple: re-compile the app in Framework Dependent - Portable mode, and copy it onto the device. Instead of a native executable, you end up with your app's .dll file.
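For reference, that's just the default dotnet publish output when no runtime identifier is specified:

dotnet publish -c Release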

Then, on the Raspberry Pi Zero, install Mono and run your app:


sudo apt update
sudo apt upgrade
sudo apt install mono-complete
mono ./ConsoleApp.dll
Hello World!

Success! I hope this helps someone else who ran into the same issue.

Setting up MAAS and Kubernetes in a virtualised environment

Setting up MAAS with Kubernetes and persistent storage in a Hyper-V environment, with an existing network and DHCP server. Unfortunately, there is limited documentation on running MAAS in an existing network with a DHCP server. Furthermore, there's little to no mention of Hyper-V support.

While it's not the recommended environment, I recently decided to try spinning up a Kubernetes cluster in an existing network, and it seemed like MAAS is now the recommended way to deploy it. Since the existing network was running a Hyper-V cluster, I decided to see how hard it would be to spin up MAAS on top of Hyper-V machines. After experimenting, and several full wipes and clean starts, I ended up with a redundant Kubernetes cluster and distributed storage nodes using Ceph. I decided to outline the process to install and configure it, as well as some things I learned along the way.

(Read More)

Sunday, 14 July 2019

Private CA Part 2: Issuing certificates

In the first part, I outlined how to create a new root and an intermediate Certificate Authority using OpenSSL. Once these are created, we can get to the fun part: creating the certificates we'll be using for securing web servers, signing documents, assemblies, etc.

Each certificate has a number of fields that describe it. There are some core fields, like the Serial Number, Validity period, Subject, Issuer, Thumbprint, etc. There are also extended fields that describe the usage constraints for the certificate - for example, you could create a certificate that can only be used to sign web responses for a specific domain, or one that can only be used to sign e-mails and documents.

You could just create a certificate that doesn't have any restrictions, and could be used for just about anything. That would be ok for testing purposes, but it's generally recommended to create certificates for individual purposes. This way, if a certificate's private key is compromised, you will only have to reconfigure a single application with a new certificate.
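As a rough illustration of what issuing such a certificate looks like with OpenSSL (the file names and the server_cert extensions section refer to my own CA config, so treat them as placeholders):

# Create a key and a signing request for the server
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=server.example.local"

# Sign the request with the intermediate CA, applying server-specific usage extensions
openssl ca -config intermediate.cnf -extensions server_cert -days 365 -in server.csr -out server.crt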

(Read More)

Private CA Part 1: Building your own root and intermediate certificate authority

Getting an SSL certificate these days has become much easier than it was in the past, with the availability of free Certificate Authorities (CAs) like Let's Encrypt. But even so, there are scenarios when you need a certificate that they can't issue: longer-term certificates, complex wildcards, local addresses within your environment, and even routers that are accessed by IP instead of a DNS name. Some of these can be issued by a paid CA; others aren't an option at all. Code signing certificates are also great, but not cheap, while encryption and authentication certs are generally only issued in enterprise environments.

Getting a self-signed certificate is pretty easy - most routers will generate their own certificates, and it's pretty straightforward to create your own certificate using openssl or similar tools. The problem with self-signed certificates is that they won't be trusted by default. You still get the benefit of your connection being encrypted, but there's no guarantee that nobody intercepted your data, altered it, and signed it with their own untrusted cert, unless you check the certificate every time. You could always add your certificate to your local trust store, but you'd have to do that for every single certificate you create, on every device you access them from, which quickly becomes cumbersome.

The solution is simple - you can create your own private CA and add it to your trust store. Any certificates created by that CA would be trusted as well, which makes managing this considerably easier! You wouldn't use these certs on your public website, but they'd be perfect for internal services or your home lab.

Taking it one step further, you could also create intermediary CAs, forming a trust chain - the end device certificates would be issued by your intermediary CA. If your intermediary CA keys get compromised, you could just revoke them and create a new intermediary, without needing to update the trust store on your machines.
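To make the idea concrete, creating a self-signed root boils down to something like this (a bare-bones sketch - the real setup described in these articles uses a proper CA directory structure and config):

openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 3650 -out rootCA.crt -subj "/CN=My Private Root CA"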

In these articles I'll put down what I learned while creating my own CA. I've decided to break this down into several parts, to make it easier to digest and manage:

(Read More)

Sunday, 19 May 2019

Adding license details to NuGet packages

I've always tried to make sure I add license details to my open source projects, especially when publishing them to NuGet. Previously, this was done by adding a <licenseUrl> element to your .nuspec file, which would allow users to see license details when downloading packages.

When using the csproj 2017 format (so all .NET Core projects), you could have the dotnet pack command automatically build your NuGet package. To populate the license URL, you just had to add the following to your project file:

<PropertyGroup>
  <PackageLicenseUrl>https://opensource.org/licenses/MIT</PackageLicenseUrl>
</PropertyGroup>

Starting sometime in 2018, I noticed that my builds started throwing a new warning:

warning NU5125: The 'licenseUrl' element will be deprecated. Consider using the 'license' element instead.

I just assumed that the nuspec format was altered, and that the dotnet tool would eventually catch up. But I should have known better - of course the change was deeper than that. Half a year later, the warning was still there, so I decided to check it out, and found this issue on GitHub discussing it: https://github.com/NuGet/Home/issues/7509

Which led me to their wiki, describing the changes to the nuspec/nupkg files regarding adding license details: https://github.com/NuGet/Home/wiki/Packaging-License-within-the-nupkg

Instead of just a single URL, you can now either add a license file and point the package to it, or add a license expression, which can describe a combination rule using several well-known licenses. In the same article, they also describe how to update your csproj file to use the new license field.
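For example, the license file route looks something like this (the file name here is just whatever license file you ship in your repository):

<PropertyGroup>
  <PackageLicenseFile>LICENSE</PackageLicenseFile>
</PropertyGroup>
<ItemGroup>
  <None Include="LICENSE" Pack="true" PackagePath="" />
</ItemGroup>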

In my case, to link to the MIT license, I had to replace the PackageLicenseUrl field with the following:

<PropertyGroup>
  <PackageLicenseExpression>MIT</PackageLicenseExpression>
</PropertyGroup>

Moral of the story? Never assume, and check the docs more thoroughly.

Thursday, 01 March 2018

Deploying cross platform images to Docker registries

One thing I noticed when working with Docker and cross-platform registries was that sometimes you can pull the same image tag from a remote registry and get different images depending on which platform you requested - certainly not how the local list of images behaves! Digging deeper, I learned that this wasn't something new - it had been around for close to half a year! You can read the official announcement here: https://blog.docker.com/2017/09/docker-official-images-now-multi-platform/

Basically, when you try to pull an image from a repository, your client actually pulls a manifest file listing either the details of the image, or a list of images to choose from based on the local machine's CPU architecture, OS platform and version! This way you can pull the same image on various machines, and have it running on just about any platform you want. This is especially powerful with the release of Docker for Windows 18.03, where you can run both Windows and Linux images side by side on the same machine!
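To make that concrete, a manifest list is a small JSON document along these lines (heavily trimmed, with the digests shortened - the media types are the real ones used by the registry):

{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:...",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:...",
      "platform": { "architecture": "amd64", "os": "windows", "os.version": "10.0.16299.192" }
    }
  ]
}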

The manifest format is quite simple, and easy to digest! For example, this is the manifest for my Hello World app that I created to test multi-platform Docker deployments:

(Read More)

Running cross platform containers on Windows in Docker 18.03

With the recent release of Docker for Windows 18.03, I decided to finally start experimenting with it. One of the main features this release brings is the ability to run both Windows and Linux images side by side, instead of having to switch Docker between Linux and Windows mode. The other benefit is that it runs all the images using the Windows Container infrastructure, instead of running the Linux images in a Linux VM on your machine (which had to have some of your CPU and memory permanently allocated to it). This means that unless you're running a container, Docker will hardly use any resources at all! All this combined makes Docker for Windows considerably more attractive!

There are some issues in this release (at this moment it's part of the edge channel, so it's to be expected), but it's a great step forward.

To start things off, at the moment Docker won't try to auto-detect which platform to use when running an image. Instead, it will always assume you want to run the image on the default system platform, unless specified otherwise. For example, on my Windows 10 machine the following command will pull a Windows image:

docker pull hello-world

To get a Linux image, you have to add the --platform=linux argument:

docker pull --platform=linux hello-world

Now this might not be what you want, and I was wondering if there's a way to change it. Unfortunately, I haven't found it documented anywhere obvious, but I eventually stumbled upon a GitHub discussion which pointed me in the right direction. All you need to do is set the DOCKER_DEFAULT_PLATFORM environment variable to linux or windows to change the default platform your CLI will use! In my case, I switched the default to linux straight away.
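For example, on Windows that's a one-liner in PowerShell (setx makes it stick for future sessions):

# Current session only
$env:DOCKER_DEFAULT_PLATFORM = "linux"

# Persist for future sessions
setx DOCKER_DEFAULT_PLATFORM linux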

(Read More)

Wednesday, 21 February 2018

The woes of setting up and deploying TX Text Control

Recently I've had to set up and configure TX Text Control, and I found this to be quite a bit more challenging than I expected. Mainly this was because of the serious lack of quality documentation. That's not to say that they don't have docs - it's just that they're... confusing and not very well structured. It took me some time and a fair amount of trial and error to finally have it set up and running on the server. I thought it might be useful to share my experience, and provide a short and simple summary of the configuration I ended up with.

(Read More)

Monday, 19 February 2018

Adding Upsert support for Entity Framework Core

Like many others, I have used Entity Framework and have been a fan of the simplicity it brings to accessing databases. It's quite powerful and can be used to execute a large variety of queries. Any query that can't be expressed using LINQ syntax I usually move to a stored procedure or a function, and call that from EF. One thing I've always moved to a stored procedure is the Upsert command.

Actually, I had never called it upsert until recently, when I stumbled upon a reference to the term. Since the database engine I've worked with most is SQL Server, I've used the MERGE statement to execute an atomic (not really) UPDATE/INSERT, and it looks something like this:

MERGE dbo.[Countries] AS [T]
USING ( VALUES ( 'Australia', 'AU' ) ) AS [S] ( [Name], [ISO] )
    ON [T].[ISO] = [S].[ISO]
WHEN MATCHED THEN
    UPDATE SET
        [Name] = [S].[Name]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ( [Name], [ISO] )
    VALUES ( [S].[Name], [S].[ISO] );

Other databases that I started working with recently have similar syntax available. For example, in PostgreSQL, one could use the INSERT … ON CONFLICT DO UPDATE syntax:

INSERT INTO public."Countries" AS "T" ( "Name", "ISO" )
VALUES ( 'Australia', 'AU' )
ON CONFLICT ( "ISO" )
DO UPDATE SET "Name" = excluded."Name";

I thought it would be interesting to see whether this could be done in Entity Framework directly, rather than having to write it in SQL. Out of the box, EF doesn't support it, even though there is interest in adding it - there's even an issue on EF Core's GitHub project discussing this. But the concept itself is simple, so I thought it would be an interesting project to play around with.
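As a stopgap until then, the raw SQL can at least be hidden behind an extension method. A rough sketch (the method name is mine, and it assumes the Countries table from the example above):

using Microsoft.EntityFrameworkCore;

public static class UpsertExtensions
{
    // Wraps the MERGE statement from above; {0}/{1} are turned into SQL parameters by EF
    public static void UpsertCountry(this DbContext context, string name, string iso)
    {
        context.Database.ExecuteSqlCommand(
            @"MERGE dbo.[Countries] AS [T]
            USING ( VALUES ( {0}, {1} ) ) AS [S] ( [Name], [ISO] )
                ON [T].[ISO] = [S].[ISO]
            WHEN MATCHED THEN
                UPDATE SET [Name] = [S].[Name]
            WHEN NOT MATCHED BY TARGET THEN
                INSERT ( [Name], [ISO] )
                VALUES ( [S].[Name], [S].[ISO] );",
            name, iso);
    }
}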

(Read More)

Saturday, 01 June 2013

Using local .aar Android library packages in gradle builds

Since Gradle became the new build system for Android, there have been a lot of questions popping up all over the net about how to use it. The new build system comes with a number of great features, like multi-project builds, Android archive packages (.aar) for Android libraries, and so on. Unfortunately, since the new build system is quite fresh (version 0.4.2 at the moment), the documentation is rather limited, so not everything is clear and simple.

For example, if you have a solution with an Android library and an Android app (that depends on the library), your build will work just fine. But what if you want to decouple the library - keep it separate so that you can use it in other projects, or share it with the community? The Gradle build system will package it as an Android archive package (.aar), and you can add that as a dependency to your projects. The only problem is that referencing .aar packages locally doesn't work very well, and it seems like that's by design. As explained by +Xavier Ducrohet in this comment:

using aar files locally can be dangerous. I want to look at either detecting issues or putting huge warnings.

This means that to add a reference to an .aar package, it would ideally have to be stored in the central maven repository (now that Maven Central finally supports Android archive packages!). But what if that's not an option - for example, if the library you're referencing is still in development?
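For comparison, pulling the library from a maven repository would look roughly like this in the app's build.gradle (the coordinates are made up, and the compile configuration is the syntax of the day):

repositories {
    mavenCentral()
}

dependencies {
    compile 'com.example:androidextensions:1.0.0'
}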

(Read More)

Saturday, 25 May 2013

Referencing android library packages in Gradle

UPD: Seems like referencing local .aar packages is not recommended. But you can just as easily set them up in a local maven repo, which will work even better!

Using local .aar Android library packages in gradle builds


Playing with the new Gradle Android build system, I created some multi-project setups, and it seems to work great! I had a project with a main Android app, an Android library and a Java library all wired up and working well.

But once I tried to decouple the Android library to a separate location, and just inject the .aar package into the project dependency list, I ran into a problem. The project completely refused to build, stating:

:mainapp:packageDebug
Error: duplicate files during packaging of APK D:\Development\MyProject\mainapp\build\apk\mainapp-debug-unaligned.apk
        Path in archive: AndroidManifest.xml
        Origin 1: D:\Development\MyProject\mainapp\build\libs\mainapp-debug.ap_
        Origin 2: D:\Development\MyProject\mainapp\libs\AndroidExtensions.aar

Everything seemed to be configured correctly - the Android library was producing a proper .aar library package file, and I was sure it should work out of the box, but it was just refusing to work...

The solution was actually much simpler than I expected:

(Read More)
