Create a Linux VM in Azure

This article will walk through how you can create a Linux VM in Microsoft Azure.

Log in to the Azure Portal using your account; if you do not currently have one, create one for free – see this short article on how to do that.

Click Create a resource in the top-left.

Create a resource

Select Ubuntu Server 18.04 LTS (most current at time of writing) from the list of popular resources; if it is not shown then use the search bar to find it.

The Basics tab for Create a virtual machine will be shown. From here, the subscription should default to that assigned to your account. If needed, create a new resource group.

Create a new resource group

Instance Details

Give the virtual machine a name, e.g. UbuntuWebVM. Select a region close to you; note that some regions, such as UK West, do not allow creation of VMs, so choose a geographically close region that does.

Click Change size and select an appropriate configuration. When I compared the specs and prices for the cheapest VM sizes, B1S appeared to have the lowest cost because the cost for B1LS was displayed as unavailable. After a quick Google search I found these two links; the first states the suitability of B1LS as a web server and the second lists the costs – note that the costs are in GBP.

Info on the B1LS VM size
https://azure.microsoft.com/en-us/updates/b-series-update-b1ls-is-now-available/

Azure VM cost (make sure to select the correct OS, Region and Pricing period)
https://azure.microsoft.com/en-gb/pricing/details/virtual-machines/linux/

Given that the B1LS costs half as much as the B1S, and that its suggested target workload suits a web server, I opted for this size.

The cost is deducted from the free credit on your account, assuming your account is still eligible, which it should be if you’ve just created it.

Administrator Account

Enter a username for the VM administrator.

For authentication, the default option is SSH public key. SSH keys are more secure than passwords, and SSH is a common way of connecting to Linux servers – comparable to Windows RDP, but at the command line.

To use SSH, it is necessary to create an SSH key pair. The key pair consists of a public key, which is kept on the server, and a private key, which is kept on your computer – if the keys match then the connection can be established.

See Create SSH key pair in Azure Cloud Shell for guidance on how to create your key pair.

View the contents of the public key:

cat /home/john/.ssh/id_rsa.pub

Copy the contents, including the leading ssh-rsa, making sure that you do not miss any characters and do not include any trailing white space. Paste this into the SSH public key section.

If the key has been pasted successfully, you should see a green tick to the right-hand side.

Detailed Options

Click Next to configure Disks; the default is Premium SSD. Either leave it as the default or change it to Standard SSD, which may save a few pence on the cost.

Click Next for Networking. To make sure the server is contactable and can serve web pages, it is necessary to carry out some network configuration.

A vnet (virtual network) is created by default so that can be left as-is. For the website(s) on the server to be available on the internet (anywhere outside of the vnet), a public IP address is needed. This should also be created by default.

The NIC (network interface card) network security group (NSG) is like a firewall: it locks down traffic, only allowing through what you define.

Open SSH as an inbound port so that we can connect to the VM.
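
Should you need to open a port after the VM has been deployed, the Azure CLI in Cloud Shell can also do this. A minimal sketch, where the resource group name is a placeholder:

az vm open-port --resource-group <resource-group> --name UbuntuWebVM --port 22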

Click Next for Management and go with the defaults. Note the auto-shutdown time and change this if desired.

Click Next for Advanced and Next again for Tags.

Finally, click Next to review the settings. Once the validation passes, click Create to get the VM up and running.

Confirmation will be displayed once the VM has been deployed. Here you can see the various resources used with/by the VM.

VM completed deployment

The VM is now available for use and we’ll cover how to connect to it and configure it as a web server in the following articles.
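
As a quick preview, connecting from a machine holding the private key will look something like this – the username and IP address below are placeholders:

ssh -i ~/.ssh/id_rsa <admin-username>@<vm-public-ip>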

Create an SSH key pair in Azure

Creating an SSH key pair is straightforward and should only take a moment by following these simple steps.

To get started, log in to Azure Portal and click the Cloud Shell button on the top bar.

Cloud Shell icon

If this is the first time that you’ve used the Cloud Shell then you’ll be asked whether you want to use Bash or PowerShell; for this we will be using Bash. It is simple to switch between Bash and PowerShell in the future via the drop-down at the top of the Cloud Shell window.

Cloud Shell prompt

Next, Azure will need to create some persistent storage which will be used to store scripts, config and SSH key pairs.

At the Bash prompt, type ssh-keygen to start the key generation process. Press Enter to accept the default option for the file but do enter a passphrase when prompted. After a few seconds, the SSH key pair will be created.
ssh-keygen
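
The defaults are fine for our purposes, but if you would like to be explicit about the key type and length then options can be passed – a sketch, where the comment label is just an example:

ssh-keygen -t rsa -b 4096 -C "azure-vm-key"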

That’s all there is to it! 🙂

Create an http to https URL redirect in IIS with PowerShell

If you are hosting a website on IIS and would like your visitors to connect securely via https, whether they specify that in their browser or not, then there are a few steps you need to take.

First off, you need to install your SSL certificate into IIS.

Then install the URL Rewrite IIS module/extension which can be obtained from Microsoft here: https://www.iis.net/downloads/microsoft/url-rewrite

To ensure that a secure connection is used, we can create an http to https redirect rule. This means that when someone types in the URL http://dbaland.wordpress.com, the web server will automatically redirect them to https://dbaland.wordpress.com.

This rule can be created manually but, to help save time and ensure consistency, the following PowerShell can be used.

$webname= 'dbaland'
$rulename = $webname + ' http to https'
$domain = '.wordpress.com'
$inbound = '(.*)'
$outbound = 'https://{HTTP_HOST}{REQUEST_URI}'
$site = 'IIS:\Sites\' + $webname + $domain
$root = 'system.webServer/rewrite/rules'
$filter = "{0}/rule[@name='{1}']" -f $root, $rulename

#Match URL
#stopProcessing is not applicable for redirects, although with a rewrite it will stop further rules from running
#patternSyntax 'ECMAScript' is the configuration schema's name for regular expressions
Add-WebConfigurationProperty -PSPath $site -filter $root -name '.' -value @{name=$rulename; patternSyntax='ECMAScript'; stopProcessing='True'}
Set-WebConfigurationProperty -PSPath $site -filter "$filter/match" -name 'url' -value $inbound
#Conditions -> Logical Grouping
Set-WebConfigurationProperty -PSPath $site -filter "$filter/conditions" -name '.' -value @{input='{HTTPS}'; matchType='0'; pattern='^OFF$'; ignoreCase='True'; negate='False'}
#Action
Set-WebConfigurationProperty -PSPath $site -filter "$filter/action" -name 'type' -value 'Redirect'
Set-WebConfigurationProperty -PSPath $site -filter "$filter/action" -name 'url' -value $outbound

In the above code, specify $webname – the first part of the URL, e.g. dbaland – and then $domain – the second part of the URL, e.g. .wordpress.com.

When this is executed on the web server, the rule is automatically created and can be seen by double-clicking the URL Rewrite icon in IIS.



It is also necessary to ensure that “Require SSL” is not enabled; check this by double-clicking the SSL Settings icon.

This can be tested by browsing to http://dbaland.wordpress.com and observing that, after the website has loaded, the URL is now https://dbaland.wordpress.com.
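
You can also check from the command line with curl; fetching just the response headers for the http URL should show a 301 status with a Location header pointing at the https address:

curl -I http://dbaland.wordpress.com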

The newly created rule is written into the website’s web.config file. This can be seen by browsing to the website folder on the web server and opening the web.config, where the following should be visible:

<configuration>
  <system.webServer>
    <handlers accessPolicy="Read, Execute, Script" />
    <rewrite>
      <rules>
        <rule name="dbaland http to https" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>


My DevOps Journey, The Start

I plan to document steps in my new role as a DevOps Engineer, from the principles, practices and techniques, through to researching and learning new tools and how we implement all this in the company I work for.

My career in IT started as an Analyst Programmer working in VB6; several further development positions followed, creating applications in VB.Net and then C#, along with MS SQL Server. I then started working with servers as a Senior Server Analyst; this was followed by some years as a DBA before joining Tribal as a Development Engineer. Four years later I was promoted to Engineering Team Lead, and a further two years down the line sees me making another change.

My recent roles have been orientated around databases and servers covering Oracle and MS SQL Server on Windows and RHEL platforms. The work has been varied and has included:

  • server administration
  • estate management
  • physical and virtual servers
  • TFS for source control and builds
  • facilitating test automation with PowerShell and TestExecute
  • developing REST APIs with .NET Core and Entity Framework

Much of what I have done has given me a decent foundation to build upon, and I’m sure the experience I have gained will be invaluable.

There will be plenty for me to learn, which is just what I love, so I’ll be hoping to share some of that learning journey here as I go.

Posts will cover various aspects in parallel, for example, I’m likely to write a high level piece about what DevOps is alongside a technical document on Git – the concepts and how it is used.

I’m not going to get into the nuances of DevOps and how it is not a role or a team, that is something for another time. For the purposes here I shall be using the term DevOps to describe my role, the team I work in and how we use it to further the success of the company.

I’m excited to be starting this new chapter – working on a new product and collaborating with some talented, motivated and personable developers and engineers.

Git repo and Visual Studio 2017

When using Git as the source control system within Visual Studio, if you create your local repos in a location other than the default, it can be a pain to amend the path each time. This post will show how the default repo location can be updated…

The default location for Git repos in Visual Studio is:

C:\users\<user name>\Source\Repos

If you want to change this then follow these steps:

Open Team Explorer

Click the Home button

Then click Settings

Then Global Settings

Now set the Default Repository Location to a folder of your choice

Click Update

Hope that helps save some time and removes one tedious task when using Git with Visual Studio.

If you have any useful suggestions for Visual Studio, Git or Azure DevOps then I’d love to hear from you, so leave a comment below.

Ansible – Part 1

Ansible is one of several tools that can be used for configuration management; this post provides some notes on the various roles that Ansible can perform as well as how it works. For an introduction to Puppet, take a look at my post – “Puppet – Introduction”.

So, what is Ansible – what does it do? It can take care of:

  • Change Management
  • Provisioning
  • Automation
  • Orchestration

Change Management

Define the system state, i.e. what the system is meant to look like.

Ansible can be used to enforce the system state. For example, a web server may have the following definition:

  • Apache web server installed
  • Apache web server at version x.x.xx
  • Apache web server started

If Ansible detects that the system has changed then a “change event” is triggered to:

  • put the system back (to the defined state)
  • mark the system as changed

The next step is to determine why the system has changed.

Ansible employs idempotence – an operation is idempotent if the system state after repeated applications remains the same as it was after a single application.
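
As a sketch of idempotence in practice using Ansible’s ad-hoc mode – the inventory group webservers here is hypothetical – the first run may report the host as changed, while a second identical run reports ok and changes nothing:

#install Apache if absent; re-running makes no further changes
ansible webservers -b -m apt -a "name=apache2 state=present"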

Provisioning

Systems can be prepared and made ready for use, taking them from one state to another.

This process is different from cloning a virtual machine, as Ansible installs and configures fresh each time. The following steps may be typical in taking a fresh server from just having the OS installed to being a functional web server (sketched as commands after the list):

  • Install web server software
  • Copy configuration files
  • Copy website files
  • Install security updates
  • Start the web service
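
As a rough sketch, those steps could be run as ad-hoc commands against a hypothetical inventory group called web – the file paths are examples and, in practice, the steps would live in a playbook:

#install the web server software
ansible web -b -m apt -a "name=apache2 state=present"
#copy configuration files
ansible web -b -m copy -a "src=apache2.conf dest=/etc/apache2/apache2.conf"
#copy website files
ansible web -b -m copy -a "src=website/ dest=/var/www/html/"
#install security updates
ansible web -b -m apt -a "upgrade=safe"
#start the web service
ansible web -b -m service -a "name=apache2 state=started"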

Automation

Firstly, define the tasks to be executed automatically; they should be ordered and should make any decisions required. The tasks could be ad-hoc but may still be suitable for automation.

It should be possible to set and forget the tasks once configured for automation.

Orchestration

Automation acts on just one system, but orchestration co-ordinates automation across multiple systems such as:

  • Firewalls
  • Web servers
  • Middleware
  • Load balancers
  • Database servers

This is handled by the Ansible Control Server.
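
To get a feel for co-ordinating across several systems at once, Ansible’s ad-hoc mode accepts multiple inventory groups in a single pattern – a sketch with hypothetical group names:

#contact every host in either group
ansible 'webservers:dbservers' -m ping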


Part 2 will touch on reasons to use Ansible and a few of its characteristics, with subsequent posts covering architecture and how to create a test Ansible environment.

*nix – check disk space

If you’ve tried to use ls -lh to get the size of a directory and its contents, you’ll have found that it doesn’t give you what you were hoping for.

One method to get the size of a directory – including files, sub-directories and their files – is to use du (disk usage).

du -sch

The switches in the above example are:

-s
(–summarize)
displays only a total for each argument.

-c
(–total)
prints a total of all arguments after they have been processed, e.g. the total size used by directories and files.

-h
(–human-readable)
prints the size in an easily readable format such as 12M and 2.0G.

Running du -sch at the same directory level as the ls example returns a single summarised total for the directory tree, rather than the per-file listing that ls gives.
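
For illustration only – the sizes here are made up:

du -sch
1.2G    .
1.2G    total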

You can also get a breakdown of the individual directory sizes by omitting the -s switch, i.e. du -ch.

There are many other switches for du, take a look at man du for a list along with an explanation.
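
One variation I find handy is a one-level breakdown sorted by size – a sketch, assuming the GNU versions of du and sort for the -h options:

du -h --max-depth=1 | sort -h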

If you have any views on using du or maybe you have an alternative preferred method, please let me know in the comments section below.

More user admin in *nix

Following on from a much earlier post about user administration in Solaris, I have found a few other tasks that may be fairly common requirements.

To change a user’s login name
usermod -l <new username> <old username>
e.g. usermod -l john.knight john.knoght

“-l” tells usermod that we want to amend the login name.

This is useful if, like me, you made a typo when creating an account 🙂

Add user to supplementary group
usermod -a -G dba <username>
e.g. usermod -a -G dba john.knight

“-a” append the user to the supplementary group(s), only used with “-G”
“-G” a comma-separated list of groups, or a single group. The flag must be uppercase – lowercase “-g” changes the user’s primary group instead.

List the group membership for a user
There are a couple of ways you can check this;

groups <username>
e.g. groups john.knight
would return something like…
john.knight : oinstall dba

id <username>
e.g. id john.knight
would return something like…
uid=35364(john.knight) gid=35364(oinstall) groups=35364(oinstall),35365(dba)

Delete a group
If you have created a group which is no longer required, it is easy enough to delete it.

groupdel <group name>
e.g. groupdel oldgroup

Rename logical & physical MSSQL files

This post will provide guidance on how to amend the logical and physical file names of an MSSQL database.

When a copy of a database is restored as a new database, the logical file names will remain the same as those of the source database.

Firstly, check the current logical and physical file names:

USE master
GO
SELECT name          AS [Logical_name],
       physical_name AS [File_Path],
       type_desc     AS [File_Type],
       state_desc    AS [State]
FROM sys.master_files
WHERE database_id = DB_ID(N'Database_name')
GO

Running this query against a database called ‘SSMATEST’ on one of my database servers brings back the following:

Logical_name File_Path                File_Type State
DIRUT        D:\Data\DIRUT.mdf     ROWS      ONLINE
DIRUT_log    D:\Logs\DIRUT_log.ldf LOG       ONLINE

As can be seen, the physical names and logical names don’t match up with the name of the database.

Let’s start with the logical names…

ALTER DATABASE [SSMATEST] MODIFY FILE (NAME='DIRUT', NEWNAME='SSMATEST');
GO
ALTER DATABASE [SSMATEST] MODIFY FILE (NAME='DIRUT_log', NEWNAME='SSMATEST_log');
GO

We pass the current name of the logical file – NAME – and then the name that we wish to use – NEWNAME.

The changes can be verified by running the query at the beginning of the post, the results will show:

Logical_name File_Path                File_Type State
SSMATEST     D:\Data\DIRUT.mdf     ROWS      ONLINE
SSMATEST_log D:\Logs\DIRUT_log.ldf LOG       ONLINE

So, that’s starting to look better; let’s move on to the physical file names.

First, take the database offline. Thanks to Perry Whittle for suggesting the use of one ALTER DATABASE statement to achieve the same result as two!

It should be pointed out that you will need to carry this out during a maintenance window if the database is part of a live/production system.

ALTER DATABASE [SSMATEST] SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO

Now rename the files from DIRUT.mdf and DIRUT_log.ldf to SSMATEST.mdf and SSMATEST_log.ldf in the file system via File Explorer or the command prompt. Once that is done, return to SSMS.

Update the records in the system catalog.

ALTER DATABASE [SSMATEST] MODIFY FILE (Name='SSMATEST', FILENAME='D:\Data\SSMATEST.mdf')
GO
ALTER DATABASE [SSMATEST] MODIFY FILE (Name='SSMATEST_log', FILENAME='D:\Logs\SSMATEST_log.ldf')
GO

Check the messages to ensure that there were no problems.

The file "SSMATEST" has been modified in the system catalog. The new path will be used the next time the database is started.
The file "SSMATEST_log" has been modified in the system catalog. The new path will be used the next time the database is started.

Bring the database back online.

ALTER DATABASE [SSMATEST] SET ONLINE;
GO

Again, use the query at the top of the post to verify the changes are all good.

Logical_name File_Path                File_Type State
SSMATEST     D:\Data\SSMATEST.mdf     ROWS      ONLINE
SSMATEST_log D:\Logs\SSMATEST_log.ldf LOG       ONLINE

There we have it!

Both the logical and physical file names have been updated to reflect the name of our database.

If you are new to T-SQL then I recommend checking out this book from the “Sams Teach Yourself” series.