• Programming

What Does It Take to Make a Kernel?


The kernel this. The kernel that. People often refer to one operating system's kernel or another without truly knowing what it does, how it works, or what it takes to make one. So, what does it take to write a custom (and non-Linux) kernel?

So, what am I going to do here? In June 2018, I wrote a guide to build a complete Linux distribution from source packages, and in January 2019, I expanded on that guide by adding more packages to the original. Now it's time to dive deeper into the custom operating system topic. This article describes how to write your very own kernel from scratch and then boot up into it. Sounds pretty straightforward, right? Now, don't get too excited here. This kernel won't do much of anything. It'll print a few messages onto the screen and then halt the CPU. Sure, you can build on top of it and create something more, but that is not the purpose of this article. My main goal is to provide you, the reader, with a deep understanding of how a kernel is written.

Once upon a time, in an era long ago, embedded Linux was not really a thing. I know that sounds a bit crazy, but it's true! If you worked with a microcontroller, you were given (from the vendor) a specification, a design sheet, a manual of all its registers and nothing more. Translation: you had to write your own operating system (kernel included) from scratch. Although this guide assumes the standard generic 32-bit x86 architecture, a lot of it reflects what had to be done back in the day.

The exercises below require that you install a few packages in your preferred Linux distribution. For instance, on an Ubuntu machine, you will need the following:
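A plausible package set for Ubuntu is shown below; the exact package names are an assumption and may vary by release (grub-pc-bin and xorriso are needed by grub-mkrescue later on, and QEMU is optional but handy for testing):

```
sudo apt install nasm gcc make binutils grub-pc-bin xorriso qemu-system-x86
```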

Note: I'm going to simplify things by pretending to work with a not-so-complex 8-bit microprocessor. This doesn't reflect the modern (and possibly past) designs of any commercial processor.

When the designers of a microprocessor create a new chip, they will write some very specialized microcode for it. That microcode will contain defined operations that are accessed via operation codes, or opcodes. These defined opcodes contain instructions (for the microprocessor) to add, subtract, move values and addresses, and more. The processor will read those opcodes as part of a larger command format. This format will consist of fields that hold a series of binary numbers—that is, 0s and 1s. Remember, this processor understands only high (the 1s) and low (the 0s) signals, and when those signals (as part of an instruction) are fed to it in the proper sequence, the processor will parse/interpret the instruction and then execute it.

Here's the rundown of the command structure for the made-up processor:
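One plausible layout for a single-byte instruction, consistent with the examples that follow, packs four 2-bit fields: an opcode, two source registers and a destination register. The field ordering here is an assumption for illustration:

```
| opcode (2 bits) | src1 (2 bits) | src2 (2 bits) | dest (2 bits) |
```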

Now, what exactly is assembly language? It's as close to machine code as you can get when programming a microprocessor. It is human-readable code based on the machine's supported instruction set and not just a series of binary numbers. I guess you could memorize all the binary numbers (in their proper sequence) for every instruction, but it wouldn't make much sense, especially if you can simplify code writing with more human-readable commands.

This make-believe and completely unrealistic processor supports only four instructions, of which the ADD instruction maps to an opcode of 00 in binary, and SUB (or subtract) maps to an opcode of 01 in binary. You'll be accessing four total CPU memory registers: A or 00, B or 01, C or 10 and D or 11.

Using the above command structure, your compiled code will send the following instruction:
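In a hypothetical assembly syntax for this processor (two source registers followed by a destination register), the instruction might read:

```
ADD A, B, C
```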

Or, "add the contents of A and B and store them into register C" in the following binary machine language format:

Let's say you want to subtract A from C and store it in the D register. The human-readable code would look like the following:
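Again in the hypothetical syntax, with the minuend first (C), the subtrahend second (A) and the destination last (D):

```
SUB C, A, D
```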

And, it will translate to the following machine code for the processor's microcode to process:
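With the same assumed layout (SUB = 01, C = 10, A = 00, D = 11):

```
01 10 00 11
```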

As you would expect, the more advanced the chip (16-bit, 32-bit, 64-bit), the more instructions and larger address spaces are supported.
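To make the toy encoding above concrete, here is a small Python sketch of an "assembler" for this imaginary processor. The mnemonics, register names and 2-bit field layout are the made-up ones from this section, not anything a real chip uses:

```python
# Toy assembler for the imaginary 8-bit processor described above.
# Assumed instruction layout: [opcode:2][src1:2][src2:2][dest:2]
OPCODES = {"ADD": 0b00, "SUB": 0b01}
REGISTERS = {"A": 0b00, "B": 0b01, "C": 0b10, "D": 0b11}

def assemble(mnemonic, src1, src2, dest):
    """Pack one instruction into a single byte."""
    return (
        (OPCODES[mnemonic] << 6)
        | (REGISTERS[src1] << 4)
        | (REGISTERS[src2] << 2)
        | REGISTERS[dest]
    )

# "Add A and B, store in C" and "subtract A from C, store in D"
print(format(assemble("ADD", "A", "B", "C"), "08b"))  # 00000110
print(format(assemble("SUB", "C", "A", "D"), "08b"))  # 01100011
```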

The assembler I'm using in this tutorial is called NASM. The open-source NASM, or Netwide Assembler, will assemble the assembly code into a file format called object code. The object file generated is an intermediate step toward producing the executable binary or program. The reason for this intermediate step is that a single large source code file may end up being cut up into smaller source code files to make them more manageable in both size and complexity. For instance, when you compile the C code, you'll instruct the C compiler to produce only an object file. All object code (created from your ASM and C files) will form bits and pieces of your kernel. To finalize the compilation, you'll use a linker to take all necessary object files, combine them, and then produce the program.

The following code should be written to and saved in a file named boot.asm. You should store the file in the dedicated working directory for the project.
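A minimal boot.asm consistent with the walkthrough below. The magic number and checksum follow the Multiboot 1 specification; the 8 KB stack size is an arbitrary choice:

```
;; boot.asm -- minimal Multiboot entry stub
bits 32

section .multiboot
        dd 0x1BADB002              ; Multiboot 1 magic number
        dd 0x0                     ; flags
        dd -(0x1BADB002 + 0x0)     ; checksum: magic + flags + checksum = 0

section .text
global start
extern main                        ; main is defined in kernel.c

start:
        cli                        ; clear the interrupt flag
        mov esp, stack_space       ; point ESP at the top of our empty stack
        call main                  ; enter the C kernel
        hlt                        ; halt the CPU when main returns

section .bss
        resb 8192                  ; 8 KB of stack space
stack_space:                       ; the stack grows down toward the reserved bytes
```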

So, this looks like a bunch of nonsensical gibberish, right? It isn't. Again, this is supposed to be human-readable code. For instance, under the multiboot section, and in the proper order of the multiboot specification (refer to the section labeled "References" below), you're defining three double-word variables. Wait, what? What is a double word? Well, let's take a step back. The assembly DD pseudo-instruction translates to Define Double (word), which on an x86 32-bit system is 4 bytes (32 bits). A DW, or Define Word, is 2 bytes (16 bits), and moving even further backward, a DB, or Define Byte, is 8 bits. Think of them as the int, short and long types in your high-level coding languages.

Note: pseudo-instructions are not real x86 machine instructions. They are special instructions supported by the assembler, and they exist to help facilitate memory initialization and space reservation.

Below the multiboot section, you have a section labeled text, which is shortly followed by a function labeled start. This start function will set up the environment for your main kernel code and then execute that kernel code. It starts with a cli. The CLI instruction, or Clear Interrupts Flag, clears the IF flag in the EFLAGS register. The following line moves the address of the empty stack_space label into the Stack Pointer. The Stack Pointer is a small register on the microprocessor that contains the address of your program's last request in a Last-In-First-Out (LIFO) data buffer referred to as a Stack. The example assembly program will then call the main function defined in your C file (see below) and halt the CPU. If you look above, the extern main line is telling the assembler that the code for this function exists outside this file.

So, you wrote your boot code, and your boot code knows that there is an external main function it needs to call, but you don't have an external main function, at least not yet. Create a file in the same working directory, and name it kernel.c. The file's contents should be the following:
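A kernel.c sketch matching the description below; the function and variable names, the greeting text and the exact color constants are illustrative choices:

```
/* kernel.c -- print a couple of lines to VGA text memory, then return */

#define VIDEO_MEMORY  0xB8000
#define COLUMNS       80
#define LINES         25

/* a few of the 16 supported colors */
#define BLACK         0x00
#define BLUE          0x01
#define GREEN         0x02
#define RED           0x04
#define WHITE         0x0F

char *video_memory = (char *) VIDEO_MEMORY;
unsigned int current_index = 0;   /* where we are in video memory (in bytes) */

void clear_screen(void)
{
        unsigned int i = 0;

        /* 2 bytes per character cell: the ASCII byte and the color byte */
        while (i < COLUMNS * LINES * 2) {
                video_memory[i++] = ' ';
                video_memory[i++] = BLACK;
        }
}

void print_string(const char *str, unsigned char color)
{
        while (*str) {
                video_memory[current_index++] = *str++;
                video_memory[current_index++] = color;
        }
}

void main(void)
{
        clear_screen();
        print_string("Hello! Welcome to my kernel.", WHITE);
}
```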

If you scroll all the way to the bottom of the C file and look inside the main function, you'll notice it does the following:

In the current x86 architecture, when running in protected mode, your video memory starts at memory address 0xB8000. So, everything video-related will start from this address space, which supports up to 25 lines with 80 ASCII characters per line. Also, the video mode in which this is running supports up to 16 colors (of which I added a few to play with at the top of the C file).

Following these video definitions, a global array is defined to map to the video memory and an index to know where you are in that video memory. For instance, the index starts at 0, and if you want to move to the first character space of the next line on the screen, you'll need to increase that index to 80, and so on.

As the names of the following two functions imply, the first clears the entire screen with an ASCII empty character, and the second writes whatever string you pass into it. Note that the expected input for the video memory buffer is 2 bytes per character. The first of the two is the character you want to output, while the second is the color. This is made more obvious in the print_string() function, where the color code is actually passed into the function.

Anyway, following those two functions is the main routine with its actions already mentioned above. Remember, this is a learning exercise, and this kernel will not do anything special other than print a few things to the screen. And aside from adding real functions, this kernel code is definitely missing some profanity. (You can add that later.)

In the real world...

Every kernel will have a main() routine (spawned by a bootloader), and within that main routine, all the proper system initialization will take place. In a real and functional kernel, the main routine eventually will drop into an infinite while() loop where all future kernel functions take place or spawn a thread accomplishing pretty much the same thing. Linux does this as well. The bootloader will call the start_kernel() routine found in init/main.c, and in turn, that routine will spawn an init thread.

As mentioned previously, the linker serves a very important purpose. It is what will take all of the random object files, put them together and provide a bootable single binary file (your kernel).

Let's set the output format to be a 32-bit x86 executable. The entry point into this binary is the start function from your assembly file, which eventually loads the main program from the C file. Further down, the script essentially tells the linker how to merge your object code and at what offset. In the linker file, you explicitly specify the address at which to load your kernel binary. In this case, it is at 1M, or a 1 megabyte offset. This is where the main kernel code is expected to be, and the bootloader will find it here when it is time to load it.
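A linker script along these lines captures everything just described; the file name (linker.ld here) and the exact section list are assumptions:

```
OUTPUT_FORMAT(elf32-i386)
ENTRY(start)

SECTIONS
{
        . = 1M;                 /* load the kernel at a 1 MB offset */

        .text : { *(.text) }
        .data : { *(.data) }
        .bss  : { *(.bss)  }
}
```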

The most exciting part of the effort is that you can piggyback off the very popular GRand Unified Bootloader (GRUB) to load your kernel. In order to do this, you need to create a grub.cfg file. For the moment, write the following contents into a file of that name, and save it into your current working directory. When the time comes to build your ISO image, you'll install this file into its appropriate directory path.
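A minimal grub.cfg sketch; the menu title, timeout and the /boot/kernel path (which must match where you copy the kernel inside the ISO) are assumptions:

```
set timeout=3

menuentry "My Kernel" {
        multiboot /boot/kernel
        boot
}
```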

Build the boot.asm into an object file:
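Assuming the output should be a 32-bit ELF object, the NASM invocation looks like:

```
nasm -f elf32 boot.asm -o boot.o
```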

Build the kernel.c into an object file:
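With GCC, targeting 32-bit freestanding code; the exact flag set is a reasonable choice rather than the only one:

```
gcc -m32 -ffreestanding -fno-stack-protector -fno-builtin -c kernel.c -o kernel.o
```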

Link both object files and create the final executable program (that is, your kernel):
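Assuming the linker script was saved as linker.ld:

```
ld -m elf_i386 -T linker.ld -o kernel boot.o kernel.o
```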

Now, you should have a compiled file in the same working directory labeled kernel :
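You can verify with, for example:

```
ls -l kernel
file kernel
```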

This file is your kernel. You'll be booting into that kernel shortly.

Create a staging environment with the following directory path (from your current working directory path):
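The staging tree mirrors the layout GRUB expects on the ISO; the iso directory name itself is an arbitrary choice:

```shell
mkdir -p iso/boot/grub
```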

Let's double-check that the kernel is a multiboot file type (no output is expected with a return code of 0):
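The grub-file utility ships with the GRUB packages installed earlier:

```
grub-file --is-x86-multiboot kernel
echo $?
```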

Now, copy the kernel into your iso/boot directory:
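```
cp kernel iso/boot/
```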

And, copy your grub.cfg into the iso/boot/grub directory:
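```
cp grub.cfg iso/boot/grub/
```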

Make the final ISO image pointing to your iso subdirectory in your current working directory path:
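The ISO file name is an arbitrary choice; grub-mkrescue needs xorriso installed to do its work:

```
grub-mkrescue -o kernel.iso iso/
```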

Say you want to expand on this tutorial by automating the entire process of building the final image. The best way to accomplish this is by throwing a Makefile into the project's root directory. Here's an example of what that Makefile would look like:
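A sketch of such a Makefile, reusing the same commands as the manual steps above (remember that recipe lines in a Makefile must be indented with a tab character):

```
all: kernel.iso

boot.o: boot.asm
	nasm -f elf32 boot.asm -o boot.o

kernel.o: kernel.c
	gcc -m32 -ffreestanding -fno-stack-protector -fno-builtin -c kernel.c -o kernel.o

kernel: boot.o kernel.o linker.ld
	ld -m elf_i386 -T linker.ld -o kernel boot.o kernel.o

kernel.iso: kernel grub.cfg
	mkdir -p iso/boot/grub
	cp kernel iso/boot/
	cp grub.cfg iso/boot/grub/
	grub-mkrescue -o kernel.iso iso/

clean:
	rm -rf *.o kernel kernel.iso iso/
```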

To build (including the final ISO image), type:
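```
make
```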

To clean all of the build objects, type:
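```
make clean
```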

You now have an ISO image, and if you did everything correctly, you should be able to boot into it from a CD on a physical machine or in a virtual machine (such as VirtualBox or QEMU). Start the virtual machine after configuring its profile to boot from the ISO. You'll immediately be greeted by GRUB (Figure 1).


Figure 1. The GRUB Bootloader Counting Down to Load the Kernel

After the timeout elapses, the kernel will boot.


Figure 2. The Linux Journal kernel booted. Yes, it does only this.

You did it! You wrote your very own kernel from scratch. Again, it doesn't do much of anything, but you definitely can expand upon this. Now, if you will excuse me, I need to post a message to the USENET newsgroup, comp.os.minix, about how I developed a new kernel, and that it won't be big and professional like GNU.


Petros Koutoupis, LJ Editor at Large, is currently a senior performance software engineer at Cray for its Lustre High Performance File System division. He is also the creator and maintainer of the RapidDisk Project. Petros has worked in the data storage industry for well over a decade and has helped pioneer the many technologies unleashed in the wild today.


Read/write files within a Linux kernel module

I know all the discussions about why one should not read/write files from the kernel, and how to use /proc or netlink instead. I want to read/write anyway. I have also read Driving Me Nuts - Things You Never Should Do in the Kernel.

However, the problem is that 2.6.30 does not export sys_read(). Rather, it's wrapped in SYSCALL_DEFINE3. So if I use it in my module, I get the following warnings:

Obviously insmod cannot load the module because linking does not happen correctly.


2 Answers

You should be aware that you should avoid file I/O from within the Linux kernel when possible. The main idea is to go "one level deeper" and call VFS-level functions instead of the syscall handler directly:

Opening a file (similar to open):

Close a file (similar to close):

Reading data from a file (similar to pread):

Writing data to a file (similar to pwrite):

Syncing changes a file (similar to fsync):

[Edit] Originally, I proposed using file_fsync, which is gone in newer kernel versions. Thanks to the poor guy who suggested the change, whose edit was rejected before I could review it.


Since version 4.14 of the Linux kernel, the vfs_read and vfs_write functions are no longer exported for use in modules. Instead, functions exclusively for the kernel's file access are provided:

Also, filp_open no longer accepts a user-space string, so it can be used for kernel access directly (without the set_fs dance).


Creating a 64-bit kernel


Make sure that you have the following done before proceeding:

The Main Kernel

The kernel should run in a uniform environment. Let's make this simple for now...

Compile each source file like any piece of C code, just remember to use the cross-compiler and the proper options. Linking will be done later...
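Assuming an x86_64-elf cross-compiler on the PATH, the invocation could look like this; the flags match the points discussed in the following paragraphs:

```
x86_64-elf-gcc -ffreestanding -mcmodel=large -mno-red-zone -mno-mmx -mno-sse -mno-sse2 -c kernel.c -o kernel.o
```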

The -mcmodel=large argument enables us to run the kernel at any 64-bit virtual memory address we want. In fact, using the 'large' code model is discouraged due to its inefficiency, but it can be fine as a start. Check the SysV AMD64 ABI document for extra details.

You will need to instruct GCC not to use the AMD64 ABI's 128-byte 'red zone', which resides below the stack pointer, or your kernel will be interrupt unsafe. Check this thread on the forums for extra context.

We disable SSE floating point ops. They need special %cr0 and %cr4 setup that we're not ready for. Otherwise, several #UD and #NM exceptions will be triggered.

The kernel will be linked as an x86_64 executable, to run at a virtual higher-half address. We use a linker script:
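A simplified higher-half linker script sketch; the entry symbol, the KERNEL_LMA/KERNEL_VMA values and the section list are placeholders to adapt:

```
ENTRY(kmain)

KERNEL_LMA = 0x100000;               /* physical load address (1 MB) */
KERNEL_VMA = 0xFFFFFFFF80000000;     /* higher-half virtual base */

SECTIONS
{
        . = KERNEL_LMA + KERNEL_VMA;

        .text   : AT(ADDR(.text) - KERNEL_VMA)   { *(.text)   }
        .rodata : AT(ADDR(.rodata) - KERNEL_VMA) { *(.rodata) }
        .data   : AT(ADDR(.data) - KERNEL_VMA)   { *(.data)   }
        .bss    : AT(ADDR(.bss) - KERNEL_VMA)    { *(.bss)    }
}
```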

Feel free to edit this linker script to suit your needs. Set ENTRY(...) to your entry function, and KERNEL_VMA to your base virtual address.

You can link the kernel like this:
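Again assuming the cross-toolchain and a script named linker.ld:

```
x86_64-elf-ld -nostdlib -T linker.ld kernel.o -o kernel.bin
```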

Note : Obviously there is no bootstrap assembly yet, which is the hard part of starting out, and you can't link without it.

Before you can actually use your kernel, you need to deal with the hard job of loading it. Here are your four options:

With your own boot loader

This method is the simplest (since you write all the code), though it requires the most work.

I won't give any code, but the basic outline is:
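The outline boils down to: enter protected mode, build page tables for long mode, enable PAE, set the long-mode-enable bit in the EFER MSR, enable paging, and far-jump into a 64-bit code segment. The switch itself can be sketched in NASM-style assembly; labels such as pml4_table and long_mode_entry are placeholders, and the page-table and GDT setup are omitted:

```
; assumes: already in protected mode, paging off, PML4 built, 64-bit GDT loaded
mov eax, cr4
or  eax, 1 << 5            ; CR4.PAE: enable Physical Address Extension
mov cr4, eax

mov eax, pml4_table        ; physical address of the top-level page table
mov cr3, eax

mov ecx, 0xC0000080        ; EFER model-specific register
rdmsr
or  eax, 1 << 8            ; EFER.LME: long mode enable
wrmsr

mov eax, cr0
or  eax, 1 << 31           ; CR0.PG: enable paging (activates long mode)
mov cr0, eax

jmp 0x08:long_mode_entry   ; far jump into a 64-bit code segment
```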

With a 64 bit aware loader

Open-source boot loaders written for long mode kernels already exist; you don't have to reinvent the wheel. Unlike GRUB (which does not support switching to long mode), bootloaders such as Limine or BOOTBOOT can load your 64-bit kernel directly by doing all the things listed in the previous section (and more). This saves you the struggle of writing and properly linking bootstrap code, or of implementing your own boot loader entirely from scratch. Therefore, using a 64-bit aware bootloader is a nice, easy and reasonably bullet-proof choice for beginners.

With legacy GRUB

Note: The advice in this section is a bit questionable in its current form. See Creating a 64-bit kernel using a separate loader.

This requires the use of GRUB or another Multiboot 1-compliant loader. This may be the most error-free of the four, but creating a multiboot-compatible kernel properly has its own set of pitfalls.

A quick rundown:

Note that this code has to be stored in elf32 format and must contain the Multiboot 1 header.

Also remember to set the text section to start at 0x100000 (-Ttext 0x100000) when linking your loader.

Set up GRUB to boot your loader as a kernel in its own right, and your actual kernel as a module. Something like this in menu.lst:
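A sketch of such a menu.lst entry; the title, partition and file paths are assumptions to adapt to your setup:

```
title  My 64-bit kernel
root   (hd0,0)
kernel /boot/loader.bin
module /boot/kernel.bin
```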

With a 32-bit bootstrap in your kernel

This requires the use of any ELF64-compatible loader that loads into protected-mode (GRUB2, or patched GRUB Legacy). This may be the simplest in the long run, but is more difficult to set up. Note that GRUB2, which implements Multiboot 2 , does not support switching into long mode.

First, create an assembly file like the following, which will set up virtual addressing and long mode:


Then, add the following to your original linker file:
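One way those additions could look; the section and symbol names are assumptions, and the '+=' assignment is the operator discussed in the paragraph below:

```
KERNEL_LMA = 0x100000;

SECTIONS
{
        . = KERNEL_LMA;

        .bootstrap : { *(.bootstrap) }   /* 32-bit code, physical addressing */

        . += KERNEL_VMA;                 /* switch to virtual addressing */

        .text : AT(ADDR(.text) - KERNEL_VMA) { *(.text) }
        /* ...remaining sections as before... */
}
```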

The above edits allow the linker to link the bootstrap code with physical addressing, as virtual addressing is set up by the bootstrap. Note that in this case, KERNEL_VMA will be equivalent to 0x0, meaning that text would have a virtual address at KERNEL_LMA + KERNEL_VMA instead of just at KERNEL_VMA. Change '+=' to '=' and your bootstrap code if you do not want this behaviour.

Compile and link as usual, just remember to compile the bootstrap code as well!

Set up GRUB2 to boot your kernel (depends on your bootloader) with grub.cfg:
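Something along these lines; the menu title and kernel path are assumptions:

```
menuentry "My 64-bit kernel" {
        multiboot2 /boot/kernel.bin
        boot
}
```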

With Visual C++

The technique for creating a 64 bit kernel with a 32 bit bootstrap is similar to GCC. You need to create an assembly bootstrap with nasm (masm may work, but the author uses nasm). Note that this stub must be assembled to a 64 bit object file (-f win64). Your stub then has a BITS 32 directive. Note that, although nasm will not complain about this, Microsoft link will. It complains about address relocations, due to the memory model settings (/LARGEADDRESSAWARE, which is required for /DRIVER). As such, you need a method of generating the correct 32 bit code, while fooling link into generating a 64 bit relocation. Here is a macro for you:

Possible Problems

You may experience some problems. Fix them immediately or risk spending a lot of time debugging later...

My kernel is way too big!

Try each of the following, in order:

Kernel Virtual Memory

(This section is based on notes by Travis Geiselbrecht (geist) at the osdev IRC channel)

Long mode provides an essentially infinite amount of address space. An interesting design decision is how to map and use the kernel address space. Linux approaches the problem by permanently mapping the -2GB virtual region 0xffffffff80000000 -> 0xffffffffffffffff to physical address 0x0 upwards. Kernel data structures, which are usually allocated by kmalloc() and the slab allocator, reside above the 0xffffffff80000000 virtual base and are allocated from the physical 0 -> 2GB zone. This necessitates 'zoning' in the page allocator: the ability to ask the page allocator to return a page frame from a specific region, and only from that region. If a physical address above 2GB needs to be accessed, the kernel temporarily maps it into a temporary mapping space below the virtual base. The Linux approach provides the advantage of not having to modify the page tables much, which means fewer TLB shootdowns on an SMP system.

Another approach is to treat the kernel address space like any other address space and dynamically map its regions. This provides the advantage of simplifying the page allocator by avoiding the need for physical memory 'zones': all physical RAM is available for any part of the kernel. An example of this approach is mapping the kernel to 0xffffffff80000000 as usual, putting a large mapping of the entire physical address space below that virtual address, and using the region above the kernel memory area as a temporary mapping space.


DVT Software Engineering


Ruan de Bruyn

Jan 29, 2021

How to write your first Linux Kernel Module

The Linux Kernel is perhaps the most ubiquitous (and arguably still underappreciated) piece of software around today. It forms the basis of all Linux distributions (obviously), but that's not all. It's also running on lots of embedded hardware pretty much everywhere. Got a microwave? It's probably running the Linux Kernel. Dishwasher? That too. Got enough money for a Tesla vehicle? Maybe you can fix a few bugs you find, and submit a patch to their Model S and Model X code on Github. Circuitry that keeps the International Space Station from crashing into the Earth in a fiery mass of death and destruction? Of course. The kernel is lightweight. Just means it plays nicely with low gravity.

The Linux kernel goes through a development cycle that is, quite frankly, insane. Some statistics from the kernel 5.10 release show that it saw 252 new authors making commits to the repo (which is also the lowest number of new contributors since 5.6), and new releases come out every 9 weeks. All in all, the kernel forms the solid bedrock of a large part of the computing world, but it's not archaic by any means. All good and well, but what if you want to poke around inside it, and maybe write some code yourself? It can be a little daunting, as it's an area of programming that most schools and boot camps don't touch on. Plus, unlike with every flavour-of-the-month JavaScript framework that comes crawling out of the woodwork whenever you blink your eyes, you can't go onto StackOverflow and find an odd billion or so posts to guide you through any issues.

So here we are then. Are you interested in writing a hello world project for the most persistent open source project out there? Partial, perhaps, to take a small dose of Operating Systems theory? Amenable to coding in a language that was created in the ’70s, and gives you a profound sense of accomplishment when you do literally anything at all and it works? Great, because I honestly can’t think of a better way to spend your time otherwise.

Heads up: in this article, I assume that you have a working knowledge of how to set up a Virtual Machine with Ubuntu. There are already tons of resources out there on how to do this, so fire up your favourite VM manager and get it done. I also assume that you’re a little familiar with C, as that is the language the kernel is written in. Since this is just a hello world module, we won’t be doing very complex coding at all, but I won’t be introducing any concepts from the language. At any rate, the code should be basic enough to be self-explanatory. With all that said, let’s get to it.

Writing the base module

Firstly, let’s just define what a kernel module is. A typical module is also called a driver and is kind of like an API, but between hardware and software. See, in most operating systems, you have two spaces where things happen. Kernel space, and userspace. Linux certainly works this way, and Windows does too. Userspace is where user-related stuff goes on, like you listening to a song on Spotify. Kernel space is where all of the low level, inner workings of the OS are. If you’re listening to a song on Spotify, a connection must have been created to their servers, and something on your computer is listening for network packets, retrieving the data inside of them, and eventually passing this on to your speakers or headphones so you can hear the sound. This is what happens in the kernel space. One of the drivers at work here is the software that allows the packets coming through your network port to be translated to music. The driver itself would have an API-like interface that allows user-space applications (or maybe even other kernel-space applications) to call its functions and retrieve those packets.

Luckily, our module won’t be anything like this, so don’t be daunted. It won’t even interact with any hardware. Many modules are entirely software-based. A good example of this is the process scheduler in the kernel, which dictates which cores of your CPU are working on which running process at any given time. A module that purely works with software is also the best place to start getting your hands dirty. Startup your VM, open up the terminal with Ctrl+Alt+T and do the ol’

sudo apt update && sudo apt upgrade

to make sure your software is up to date. Next, let’s get the new software packages we’ll need for this endeavour. Run

sudo apt install gcc make build-essential libncurses-dev exuberant-ctags

With that, we can finally start coding. We’ll start it off easy, and just put the following code in a source file. I put mine in Documents and named it dvt-driver.c
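A sketch of what dvt-driver.c could contain at this stage, matching the walkthrough that follows; the author string and the exact log messages are placeholders:

```
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/proc_fs.h>      /* not needed just yet, but used later on */

/* Module metadata */
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Your Name");
MODULE_DESCRIPTION("A hello world Linux kernel module");
MODULE_VERSION("1.0");

/* Runs when the module is inserted into the kernel */
static int __init dvt_driver_init(void)
{
        printk(KERN_INFO "Hello world driver loaded.\n");
        return 0;
}

/* Runs when the module is removed from the kernel */
static void __exit dvt_driver_exit(void)
{
        printk(KERN_INFO "Hello world driver removed.\n");
}

/* Register our custom init and exit functions */
module_init(dvt_driver_init);
module_exit(dvt_driver_exit);
```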

Note that we don’t need all the includes right this second, but we’ll use all of them soon enough. Next, we need to compile it. Create a new file called Makefile alongside the source code, and put the following contents in it:

Open the terminal in the directory of your two files, and run make. At this point, you should see some console output of your module compiling, and this whole process should spit out a file named dvt-driver.ko. This is your fully functional, compiled kernel module. Let's load this ground-breaking piece of intellectual property into the kernel, shall we? It's not doing us any good sitting here by itself. In the same directory as your code, run

sudo insmod dvt-driver.ko

and your driver should be inserted into the kernel. You can verify this by running lsmod, which lists all of the modules currently in the kernel. Among them, you should see dvt_driver. Note that the kernel replaces dashes in your module's filename with underscores when it loads it. If you want to remove it, you can run

sudo rmmod dvt_driver

In the source code, we also do some logging to let it be known our driver loaded okay, so run dmesg from the terminal. This command is a shortcut for printing the kernel’s logs to the screen, and prettifying it a bit so it’s more readable. The most recent lines of output from dmesg should be the messages from the driver, saying the hello world driver has been loaded, and so on. Note that there is sometimes a lag in seeing init and exit function messages from drivers, but if you insert and remove the module twice, you should see all these messages being logged. If you want to see these messages get logged live-action, you can open up a second terminal, and do dmesg --follow . Then, as you insert and remove your driver from the other terminal, you’ll see the messages popping up.

So let’s examine what we have so far. In the source code, we start with some module metadata. You can get away with not specifying the author and so on, but you might as well put your name in there. The compiler would also give you a stern warning if you don’t include a license code, and my pathological desire for approval or acceptance in my life from virtually anything capable of providing it dictates that I need to specify said license code. If you are not marred by such psychological afflictions, it’s probably also good to note that the kernel maintainers are quite wary of taking in code that is not open source, and they pay good attention to details like licenses. In the past, big companies have been denied the right to put proprietary kernel modules in the source code . Don’t be like those guys. Be good. Be open-source. Use open-source licenses.

Next, we make custom init and exit functions. Whenever a module is loaded into the kernel, its init function is run, and conversely, the exit function is run when it's removed. Our functions aren't doing much, just logging text to the kernel logs. The printk() function is the kernel's version of the classic print function from C. Obviously, the kernel does not have some terminal or screen available with which to print random things, so the printk() function prints to the kernel logs. You have the KERN_INFO macro for logging general stuff. You can also use macros like KERN_ERR in case an error occurs, which will alter the output formatting in dmesg. At any rate, the two functions for init and exit are registered in the last two lines of the source code. You have to do this; your driver has no other way of knowing which functions to run. You can also name them whatever you want, so long as their signature (arguments and return type) is the same as the ones I used.

Lastly, there’s the Makefile. Many open-source projects use the GNU Make utility for compiling libraries. This is typically used for libraries coded in C/C++ and is just a way of automating compiling your code. The Makefile listed here is the standard way of compiling your module. The first line appends your to-be-compiled .o file to the obj-m variable. The kernel is also compiled this way and appends a lot of .o files to this variable before compiling. In the next line, we employ some sleight of hand. See, the rules and commands for building kernel modules are already defined in the Makefile that ships with the kernel. We don’t have to write our own, we can use the kernel’s rules instead…which is exactly what we’re doing. In the -C argument, we point to the root directory of our kernel sources. Then we tell it to target our project’s working directory and compile the modules. Voilà. GNU Make is a deceptively powerful compiling tool, that can be used for automating the compilation of any kind of project, not just C/C++ projects. If you want to read up on it, you can look at this book , absolutely for free (as in beer, and as in speech).

The /proc entry

Let’s get to the meat of this post. Logging messages in the kernel is all good and well, but this is not the stuff that great modules are made of. Earlier in the article, I mentioned that kernel modules typically act as APIs for user space programs. Right now, our driver doesn’t do anything like that. Linux has a very neat way of handling this interaction; it works with an “everything is a file” abstraction.

To demonstrate, open up another terminal, and do cd /proc. Running ls, you should see a bunch of files listed. Now, run cat modules, and you'll see some text printed to the screen. Does that look familiar? It should; all of the modules presented in the lsmod command you ran earlier are present here as well. Let's try cat meminfo. Now we have info from the memory usage of the virtual machine. Cool. One last command to try: do ls -sh. This lists the size of each file alongside its name, and...wait, what? What is this madness?

Their sizes are all 0 bytes. Nothing. And even though not a single bit is expended for these files, we just read their contents…? Well, that’s right, actually. See, /proc is the process directory, and is sort of a central place for userspace applications to get information from (and sometimes control) kernel modules. Ubuntu’s version of Task Manager is System Monitor, which you can run by tapping the OS key on your keyboard, and typing “system”, at which point a shortcut to System Monitor should be visible. System Monitor shows stats like which processes are running, CPU usage, memory usage, etc. And it gets all this information by reading the special files in /proc , like meminfo .

Let’s add the functionality to our driver so we can have our own entry in /proc . We will make it so that when a userspace application reads from it, it will greet us with a hello world message. Replace all the code under our module metadata with the following:
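The listing itself was lost in formatting, but based on the behaviour described in the rest of the article, the code is along these lines. This is a sketch only: it assumes a kernel old enough to register /proc entries through struct file_operations (newer kernels use struct proc_ops instead), and it cannot be compiled outside a kernel build tree.

```c
#include <linux/init.h>
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/uaccess.h>

static ssize_t custom_read(struct file *file, char __user *user_buffer,
                           size_t count, loff_t *offset)
{
    char greeting[] = "Hello world!\n";
    int greeting_length = 13;

    /* Second chunk onwards: nothing left to send, so signal EOF. */
    if (*offset > 0)
        return 0;

    /* Copy the greeting into the userspace buffer... */
    if (copy_to_user(user_buffer, greeting, greeting_length))
        return -EFAULT;
    *offset += greeting_length;   /* ...and advance the offset. */
    return greeting_length;       /* number of bytes "read" */
}

static const struct file_operations proc_fops = {
    .owner = THIS_MODULE,
    .read  = custom_read,
};

static int __init helloworld_init(void)
{
    proc_create("helloworlddriver", 0444, NULL, &proc_fops);
    return 0;
}

static void __exit helloworld_exit(void)
{
    remove_proc_entry("helloworlddriver", NULL);
}

module_init(helloworld_init);
module_exit(helloworld_exit);
```

The module metadata (MODULE_LICENSE and friends) stays above this code, as before.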

Now, remove the driver from the kernel, recompile, and insert the new .ko module into the kernel. Run cat /proc/helloworlddriver, and you should see our driver returning the hello world greeting to the terminal. Very neat, if you ask me. But alas, the cat command may be too easy to really drive home what we're doing here, so let's write our own user space application to interact with this driver. Put the following Python code in a script in any directory (I called mine hello.py):
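The Python listing was also lost in formatting; a minimal version consistent with the description would be the sketch below. The /proc path matches the entry the driver registers, so the script only produces the greeting while the module is loaded.

```python
# hello.py -- read our kernel module's /proc entry like any other file
def read_greeting(path="/proc/helloworlddriver"):
    with open(path) as f:
        return f.read()

if __name__ == "__main__":
    try:
        print(read_greeting(), end="")
    except FileNotFoundError:
        print("driver not loaded: /proc/helloworlddriver not found")
```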

This code should be self-explanatory, and as you can see, this is exactly how you would do file I/O in any programming language. The /proc/helloworlddriver file is our API to the kernel module we just made. If you run python3 hello.py , you should see it printing our greeting to the terminal. Cool stuff.

In our code, we made a custom read function. As you might guess, you can override the write function as well, if your module requires some userspace input. For instance, if you had a driver that controls the speed of the fans in your PC, you could give it a write function where you write a percentage number between 0 and 100 to the file, and your driver manually adjusts the fan speed accordingly. If you’d like to know how this function overriding actually works, read the next section. If you’re not interested, just skip on ahead to the end of the article.

Bonus Section — How Does This Even Work?

In this section, I figured some of you might be curious as to how overriding read/write functions for a /proc entry actually works. To know this, we need to delve into some OS theory, and we’ll use Assembly as an analogy.

In Assembly, your program has a "stack" for keeping track of variables you make during execution. It is a little different from a canonical Computer Science stack, though: you can push and pop to it, but you can also access arbitrary elements in the stack (not just the element on top) and read or change them. Alright, so let's say you define a function with two arguments in your Assembly code. You don't just pass these variables when calling the function, no sir. Passing variables to functions in brackets is for amateurs copying code for a Python chatbot from an online tutorial. Assembly programmers are kind of a big deal, folks, putting Apollo 11 on the moon with their craft. We're talking no pain, no gain. Before you call your two-argument function, you have to push your arguments onto the stack. Then you call your function, which usually reads the arguments from the top of the stack backwards, and uses them however it needs to. Plenty of pain to be had here, actually, since it's all too easy to push your arguments onto the stack in the wrong order, and then your function reads the arguments as gibberish.

I mention this since your OS has a very similar way of executing code. It has its own stack for keeping track of variables, and when the kernel calls an OS function, that function looks for arguments on the top of the stack and then executes. If you want to read a file from disk, you call the read function with a few arguments; these arguments get put on the kernel's stack, and the read function is then called to read the file (or parts of it) from disk. The kernel keeps track of all its functions in a huge table, where each entry lists the function name and the address in memory where the function is stored. This is where our own custom functions come in. See, even though our module interactions happen via files, there's no hard and fast rule that when we read from such a file, some standard read function must be called. The read function is just an address in memory, looked up in a table. We can override which function in memory gets called when a userspace program reads our module's /proc entry, and that's precisely what we're doing! In the file_operations struct, we assign the .read attribute to our custom_read function and then register the /proc entry with it. When we call the read function from our Python user space application, it might look like you're reading a file from disk, and you're passing all the right arguments on the kernel's stack, but at the last moment our custom_read function is called instead, via its own address in memory that we made the kernel aware of. This works because our custom_read function takes exactly the same arguments as reading a file from disk, so the correct arguments are read from the kernel's stack in the correct order.

The thing we have to keep in mind here is that userspace applications will treat our /proc entry as if it's a file on disk, and will read and write to it as such. The onus falls on us to make sure that this interaction holds. Our module has to behave just like a regular file on disk, even though it's not. When most programming languages read a file, they usually do so in chunks. Let's say the chunks are 1024 bytes at a time. You would read the first 1024 bytes from a file into a buffer, which would contain bytes 0–1023 after it's done. The read operation returns 1024, to tell you that 1024 bytes were read successfully. Then the next 1024 bytes are read, and the buffer contains bytes 1024–2047. Eventually, we'll reach the end of our file. Maybe the last chunk will ask for 1024 bytes, but there are only 800 left. So the read function returns 800 and puts those last 800 bytes in the buffer. Finally, the program will ask for yet another chunk, but our file's contents have been read fully, so the read function returns 0. When this happens, your programming language knows that it has reached the end of the file and will stop trying to read from it.

Looking at the arguments of our own custom_read function, you can likely see the arguments that make this happen. The file struct represents the file that our userspace application is reading from (this specific struct is a kernel-only thing, but that's not important for this article). Our last arguments are the buffer, count, and offset. The buffer is our user-space buffer, and basically contains the memory address of the array that we're writing bytes into. The count is our chunk size. The offset is the point in the file that we're reading a chunk from, as you've probably surmised. Let's look at what we expect to happen when reading from our module. We're only returning "Hello world!" to userspace. Including the newline at the end of the string, this is 13 characters, which will comfortably fit into pretty much any chunk size. Reading from our /proc entry will go like so: we read our first chunk, write the greeting into the buffer, and return 13 (the length of our greeting string) to the user space application, since 13 bytes were read. Then the second chunk will read from an offset of 13, which is the "end" of our file (we have nothing left to send back, after all), so we return 0. The logic in our custom_read function reflects this. If the offset passed to it is greater than 0, it means we've already given our greeting, so we just return 0 and call it a day. Otherwise, we copy our greeting string to the user-space buffer and update our offset accordingly.

Other types of functions, like overriding a write function, should follow the same principles. Your function can do anything, just so long as it acts like a file to any userspace applications doing read/write operations on it.

Thank you for reading this post, and I hope you found it interesting enough to start poking around the kernel on your own. Though we used a VM in this article, knowing how to write kernel modules is a must if you're ever writing code for embedded systems (like IoT devices). If that's the case, or you want to learn more about kernel development, check out the KernelNewbies website, in particular this tutorial. There are many books available as well, but look for a fairly recent publication date before buying one. At any rate, you probably just wrote your first Linux kernel module ever, so be proud, and happy coding!

Ruan de Bruyn


What is a kernel?

Kernels - at the heart of the operating system

Anyone who uses technologies with an operating system is working with a kernel, though often without realizing it. The kernel organizes processes and data in every computer. It serves as the core of an operating system and the interface between software and hardware. This means that the kernel is in constant use and is a key component of an operating system.

The kernel not only serves as the core of the system but is also a program that controls all processor and memory access. It is responsible for the most important drivers and has direct access to the hardware. It's the basis for interactions between hardware and software and manages their resources as efficiently as possible.

Structure of a kernel


The kernel is the heart of the operating system and controls all the important functions of hardware – this is the case for Linux, macOS and Windows, smartphones, servers, and virtualizations like KVM, as well as every other type of computer.

A kernel is always built the same way and consists of several layers:

A kernel is central to all layers, from system hardware to application software. Its work ends where user access begins: at the Graphical User Interface (GUI). The kernel thus borders on the shell (that is, the user interface). You can picture the kernel as a seed or pit and the shell as the fruit that surrounds the pit.

Think of the kernel in this context like a colonel: they both pass along commands. A program sends "system calls" to the kernel, for example when a file is written. The kernel, well-versed in the instruction set of the CPU, then translates the system call into machine language and forwards it to the CPU. All of this usually happens in the background, without the user noticing.

The main task of the kernel is multitasking. This requires keeping to time constraints and remaining open to other applications and expansions.

Even in a system as lean and well-functioning as an operating system, there are exceptions to every rule. That's why the kernel serves only as a go-between for system software, libraries, and application software. In Linux, the graphical interface is independent of the kernel.

In multi-user systems, the kernel also monitors access rights to files and hardware components; the Task Manager shows what these are at any given time. If the user ends a process, the Task Manager instructs the kernel to stop the process and free the memory it was using.

When a computer powers up, the kernel is the first thing loaded into RAM. The bootloader places it in a protected memory area, so that the kernel can't be changed or deleted.

Afterwards, the kernel initializes the connected devices and starts the first processes. System services are loaded, other processes are started or stopped, and user programs and memory allocation are initiated.

This question is best answered by countering: What is a kernel not? The kernel is not the core of a processor, it’s the core of the operating system . A kernel is also not an API or framework.

Multikernel operating systems can use various cores of a multicore processor like a network of independent CPUs. How does that work? It comes down to the special structure of the kernel, which is composed of a series of different components:

From these components follow the four functions of the kernel :

When implemented properly, the functions of the kernel are invisible to users. The kernel works in its own setting, the kernel space. On the other hand, files, programs, games, browsers, and everything the user sees are located in the user space. Interaction between the two uses the system call interface (SCI).

To understand the function of the kernel in the operating system, imagine the computer as divided into three levels:

There are two modes for the code in a system: kernel mode and user mode . The code in kernel mode has unlimited access to the hardware, whereas in user mode access is limited to the SCI. If there’s an error in user mode, not much happens. The kernel will intervene and repair any potential damage. On the other hand, a kernel crash can cause the entire system to crash. This is, however, unlikely due to the security measures in place.

What kind of kernels exist?

One type of kernel previously described is the multitasking kernel, which runs several processes simultaneously on one kernel. If you add access management to it, you'll have a multiuser system, on which several users can work at the same time. The kernel is responsible for authentication, since it can allot processes to users or keep them separate.

Linux maintains a comprehensive archive on its kernel. Apple has published the kernel types for all of its operating systems for open source access. Microsoft also uses a Linux kernel for the Windows subsystem for Linux .

It’s easy to lose track of the different kernel types. Linux systems and Android devices use a Linux kernel. Windows uses the NT kernel, which various subsystems draw on. Apple uses the XNU kernel.

There are various types of kernels that are used across different operating systems and end devices. They can be sorted into three groups:


C# Corner


Writing a 16-bit dummy kernel in C/C++



In my previous articles I was only writing about how to write a boot loader. That was fun and challenging, and I enjoyed it a lot. But after learning how to write a boot-loader, I wanted to write much better stuff, like embedding more functionality into it. The size of the loader then kept growing beyond 512 bytes, and obviously I kept seeing the error "This is not a bootable disk" each time I rebooted my system with the boot disk.

What is the scope of the article?

In this article I will try to explain the importance of a file system for our boot-loader, and also write a dummy kernel which does nothing but display a prompt for the user to type in text. I will also cover why I am embedding my boot-loader into a FAT-formatted floppy disk and how that benefits me. As one article is too small to cover file systems in full, I will do my best to keep it short and simple.

Also, you can check out my previous articles to get a basic idea of what a boot-loader is and how to write one in Assembly and C.

Here are the links.

How are the contents organized?

Here is the break down of the topics for this article.

Boot-loader limitation(s)

FAT File System

FAT workflow

Development environment

Writing a FAT boot-loader

Mini-project: writing a 16-bit kernel

Testing the kernel

In the previous articles I wrote a boot-loader, and after printing colored rectangles on the screen I wanted to embed more functionality into it. But the 512-byte size limit prevented me from extending the boot-loader's code to do more...

The challenges are as below

How am I going to deal with the above?

Let me brief you as below.

In our boot-loader, all we can do is load the second sector (kernel.bin) of the bootable drive into RAM at an address, say 0x1000, and then jump from 0x7c00 to 0x1000 to start executing the kernel.bin file.

Below is the picture you can refer to get an idea.

Image 1

Invoking other files on the disk from bootloader

Earlier, we came to know that we can pass control from the bootloader (0x7c00) to another location in memory where a disk file like kernel.bin has been loaded, and then proceed further. But I have a few queries in mind.

Do you know how many sectors the kernel.bin file will occupy on the disk?

I think that's easy. All we have to do is the following: 1 sector = 512 bytes. So if the size of kernel.bin is, say, 512 bytes, it will occupy 1 sector; if the size is 1024 bytes, then 2 sectors; and so on... The point is that, based on the size of the kernel.bin file, you have to hard-code into the boot-loader the number of sectors to read. This means that if in future you upgrade the kernel frequently, you also have to remember to update the sector count in the boot-loader each time, or else the kernel crashes.

What do you think if you want to add more files like office.bin, entertainment.bin, drivers.bin apart from kernel.bin to your bootable drive?

How do you know whether the files you are appending one by one after the boot sector are the ones you desired? What is missing? What happens if by mistake I copy a wrong file to the second sector of the boot disk, then update the boot-loader and run it?

A file system eliminates a few of these problems.

Earlier, the boot-loader blindly loaded whichever sectors were hard-coded into it.

Why should you load a file if you do not know whether it is the correct one?

What is the solution?

All we have to do is organize the information on the disk as listed above, and then reprogram our boot-loader so that it can be really efficient in loading files.

This way of organizing data at a large scale is known as a file system. There are many types of file systems out there, both commercial and free. I will list a few of them below.

Before I introduce you to file system, there are few terminologies that you need to know about.

In FAT file system, a cluster occupies 1 sector and a sector occupies 512 bytes on storage media. So, 1 cluster is equivalent to 1 sector on a FAT formatted disk drive.

The cluster and sector are the smallest units on a FAT file system.

For ease of use, the FAT file system is divided into four major portions and they are listed as below.

I tried my best to show it to you in the form of a picture for better understanding.

Image 2

Let me brief you about each part now.

Boot Sector:

A boot sector on a FAT-formatted disk is embedded with some information related to FAT, so that each time the disk is inserted into a system, its file system is automatically recognized by the operating system.

The operating system reads the boot sector of the FAT formatted disk and then parses the required information and then recognizes the type of file system and then starts reading the contents accordingly.

The information about the FAT file system that is embedded inside a boot sector is called the Boot Parameter Block.

Boot Parameter Block:

Let me present the values inside a Boot Parameter Block with respect to the boot sector.

Image 3

File Allocation Table:

This table acts like a linked list containing the next cluster value of a file.

The cluster value obtained from FAT for a particular file is useful in two ways.

I have mentioned FAT tables 1 & 2 in the picture. All you need to remember is that one table is a copy of the other: in case data in one is lost or corrupted, the other table can act as a backup. That redundancy is the sole reason for having two tables rather than one.

Root Directory:

The root directory acts like an index listing all file names present on the disk. So the bootloader should search for the file name in the root directory, and if the search is successful, it can find the file's first cluster there and then load the data accordingly.

Now, after finding the first cluster in the root directory, the bootloader should use the FAT table to find the subsequent clusters and check for the end of the file.

Data Area:

This is the area that actually contains the data of the file(s).

Once the proper sector of the file is identified by the program, the data of the file can be extracted from the data area.

Suppose our boot-loader should load the kernel.bin file into memory and then execute it. In this scenario, all we have to do is code the below functionality into our boot-loader.

Compare the first 11 bytes of data, starting at offset 0 of each root directory entry, with the file name (stored in its space-padded 8.3 form, i.e. "KERNEL  BIN").

If the string matches then extract the first cluster of the "kernel.bin" file at offset 26 in root directory table.

Now you have the starting cluster of the "kernel.bin" file.

All you have to do is to convert the cluster into the respective sector and then load the data into memory.

Now, after finding the first sector of the kernel.bin file, load it into memory, and then look up the File Allocation Table for the file's next cluster to check whether the file still has data or the end of the file has been reached.

Below is the diagram for your reference.

Image 4

To successfully achieve this task, we need to know about the below. Please refer to my previous articles for more information about them.

Below is the code snippet used to execute a kernel.bin file on a FAT formatted disk.

Here is the bootloader

File Name: stage0.S

This is the main loader file. It does the following.

File Name: macros.S

This is a file which contains all the predefined macros and macro functions.


Usage: initEnvironment


Usage: writeString <String Variable>


Usage: readSector <sector number>, <target address>, <target address offset>, <total sectors to read>

Usage: findFile <target File name>


Usage: clusterToLinearBlockAddress <cluster ID>

Usage: loadFile <target file name>


Usage: initKernel

File Name: routines.S


Usage: call _initEnvironment


_writeString — displays a null-terminated string on the screen; takes one argument, a null-terminated string variable.
Usage: pushw <string variable>; call _writeString; addw $0x02, %sp

_readSector — reads a given sector from the disk and loads it at a target address; takes four arguments.
Usage: pushw <sectorNo>; pushw <address>; pushw <offset>; pushw <totalSectors>; call _readSector; addw $0x0008, %sp

_findFile — checks for the existence of a file; takes one argument.
Usage: pushw <target file variable>; call _findFile; addw $0x02, %sp

_clusterToLinearBlockAddress — converts a given cluster ID into a sector number; takes one argument.
Usage: pushw <cluster ID>; call _clusterToLinearBlockAddress; addw $0x02, %sp

_loadFile — loads the target file into memory and then passes execution control to it; takes one argument.
Usage: pushw <target file>; call _loadFile; addw $0x02, %sp

File Name: stage0.ld

This file is used to link the stage0.object file during the link time.

File Name: bochsrc.txt

This is the configuration file required to run the bochs emulator which is used to serve for testing purposes.

The file below is the source code of the dummy kernel introduced for testing. All we have to do is compile the source using the makefile and see whether the bootloader loads it. A splash screen with a dragon image drawn in text is displayed, then a welcome screen, followed by a command prompt where the user can type anything. There are no commands or utilities implemented; this kernel exists purely for testing and is worth nothing as of now.

File Name: kernel.c

/*********************************************************************
 * name       : kernel.c
 * date       : 23-feb-2014
 * version    : 0.0.1
 * source     : c
 * author     : ashakiran bhatter
 *
 * description: this is the file that stage0.bin loads and passes
 *              control of execution to. its main functionality is
 *              to display a very simple splash screen and a command
 *              prompt so that the user can type commands
 * caution    : it does not recognize any commands, as none are
 *              programmed in
 *********************************************************************/

/* generate 16-bit code */
__asm__(".code16\n");
/* jump to the main function */
__asm__("jmpl $0x1000, $main\n");

#define true  0x01
#define false 0x00

char str[] = "$> ";

/* set up the registers and stack as required */
void initEnvironment() {
    __asm__ __volatile__(
        "cli;"
        "movw $0x0000, %ax;"
        "movw %ax, %ss;"
        "movw $0xffff, %sp;"
        "cld;"
    );
    __asm__ __volatile__(
        "movw $0x1000, %ax;"
        "movw %ax, %ds;"
        "movw %ax, %es;"
        "movw %ax, %fs;"
        "movw %ax, %gs;"
    );
}

/* vga functions */

/* set the vga mode to 80x25 */
void setResolution() {
    __asm__ __volatile__("int $0x10" : : "a"(0x0003));
}

/* clear the screen buffer by printing spaces */
void clearScreen() {
    __asm__ __volatile__("int $0x10" : : "a"(0x0200), "b"(0x0000), "d"(0x0000));
    __asm__ __volatile__("int $0x10" : : "a"(0x0920), "b"(0x0007), "c"(0x2000));
}

/* set the cursor position to a given column and row */
void setCursor(short col, short row) {
    __asm__ __volatile__("int $0x10" : : "a"(0x0200), "d"((row <<= 8) | col));
}

/* enable or disable the cursor */
void showCursor(short choice) {
    if (choice == false) {
        __asm__ __volatile__("int $0x10" : : "a"(0x0100), "c"(0x3200));
    } else {
        __asm__ __volatile__("int $0x10" : : "a"(0x0100), "c"(0x0007));
    }
}

/* initialize the vga to 80x25 mode, clear the screen and
   set the cursor position to (0,0) */
void initVGA() {
    setResolution();
    clearScreen();
    setCursor(0, 0);
}

/* io functions */

/* get a character from the keyboard with no echo */
void getch() {
    __asm__ __volatile__(
        "xorw %ax, %ax\n"
        "int $0x16\n"
    );
}

/* same as getch(), but returns the scan code and ascii value
   of the key hit on the keyboard */
short getchar() {
    short word;
    __asm__ __volatile__("int $0x16" : : "a"(0x1000));
    __asm__ __volatile__("movw %%ax, %0" : "=r"(word));
    return word;
}

/* display the key on the screen */
void putchar(short ch) {
    __asm__ __volatile__("int $0x10" : : "a"(0x0e00 | (char)ch));
}

/* print a null-terminated string on the screen */
void printString(const char *pStr) {
    while (*pStr) {
        __asm__ __volatile__("int $0x10" : : "a"(0x0e00 | *pStr), "b"(0x0002));
        ++pStr;
    }
}

/* sleep for a given number of seconds */
void delay(int seconds) {
    __asm__ __volatile__("int $0x15" : : "a"(0x8600),
                         "c"(0x000f * seconds), "d"(0x4240 * seconds));
}

/* string functions */

/* calculate the length of a string and return it */
int strLength(const char *pStr) {
    int i = 0;
    while (pStr[i]) {
        ++i;
    }
    return i;
}

/* ui functions */

/* display the splash screen */
void splashScreen(const char *pStr) {
    showCursor(false);
    clearScreen();
    setCursor(0, 9);
    printString(pStr);
    delay(10);
}

/* shell: display a dummy command prompt and scroll down
   when the user hits the return key */
void shell() {
    clearScreen();
    showCursor(true);
    while (true) {
        printString(str);
        short byte;
        while ((byte = getchar())) {
            if ((byte >> 8) == 0x1c) {   /* return key */
                putchar(10);
                putchar(13);
                break;
            } else {
                putchar(byte);
            }
        }
    }
}

/* main entry point of the kernel */
void main() {
    /* msgPicture originally holds a large ascii-art dragon logo,
       garbled beyond recovery here; a one-line stand-in is used */
    const char msgPicture[] = "        kirux        \n\r";
    const char msgWelcome[] =
        "*******************************************************\n\r"
        "*                                                     *\n\r"
        "*          welcome to kirux operating system          *\n\r"
        "*                                                     *\n\r"
        "*******************************************************\n\r"
        "*                                                     *\n\r"
        "*                                                     *\n\r"
        "*       author : ashakiran bhatter                    *\n\r"
        "*       version: 0.0.1                                *\n\r"
        "*       date   : 01-mar-2014                          *\n\r"
        "*                                                     *\n\r"
        "******************************************************";

    initEnvironment();
    initVGA();
    splashScreen(msgPicture);
    splashScreen(msgWelcome);
    shell();
    while (1);
}

Let me briefly describe the functions:

- initenvironment(): sets up the segment registers and the stack.
- setresolution(), clearscreen(), setcursor() and showcursor(): VGA helpers that set the 80x25 text mode, clear the screen, position the cursor, and show or hide it.
- initvga(): initializes the VGA by calling the three helpers above.
- getch() and getchar(): read a key from the keyboard via BIOS interrupt 0x16; the latter returns the scan code and ASCII value of the key.
- putchar() and printstring(): write a character or a null-terminated string to the screen via BIOS interrupt 0x10.
- delay(): sleeps for a given number of seconds using the BIOS wait service (interrupt 0x15).
- strlength(): returns the length of a null-terminated string.
- splashscreen(): clears the screen, prints a logo or message and waits ten seconds.
- shell(): displays a dummy command prompt and scrolls down when the user hits the return key.
- main(): the kernel entry point; it initializes the environment and the VGA, shows the two splash screens and starts the shell.
Below are screenshots of the kernel after it has been loaded by the bootloader.

Using the Source Code:

Attached is the file sourcecode.tar.gz, which contains the required source files and the directory structure needed to generate the binaries.

Please make sure that you have superuser privileges on the system, and then extract the files into a directory or folder.

Make sure that you install the Bochs emulator (bochs-x64) and GNU binutils before proceeding with compiling and testing the source code.

Below is the directory structure you will see once you extract the files from the archive. There should be five directories.

Once the environment is ready, open a terminal and run the build commands.

Screenshots for your reference:

This is the first screen displayed while the kernel is executing.

Image 5

This is the welcome screen of the kernel.

Image 6

This is the command prompt I have managed to display on the screen so that the user can input some text.

Image 7

This is a screenshot of commands entered by the user; the screen scrolls as required when the user hits the return key.

Image 8

Also, please let me know if you face any issues. I would be more than happy to help.


I hope this article gives you a picture of how a file system is used and why it is important to an operating system. I also hope it helps you write a bootloader that parses a file system, as well as a 16-bit kernel in C/C++. If you like the code, try editing it and embedding more functionality into it.

It should be fun doing this. See you again :)

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)



What is a kernel?

The kernel is the essential foundation of a computer's operating system (OS). It is the core that provides basic services for all other parts of the OS. It is the main layer between the OS and the underlying computer hardware, and it helps with tasks such as process and memory management, file systems, device control and networking.

During normal system startup, a computer's basic input/output system (BIOS) completes a hardware bootstrap or initialization. It then runs a bootloader, which loads the kernel from a storage device -- such as a hard drive -- into a protected memory space. Once the kernel is loaded into computer memory, the BIOS transfers control to it. The kernel then loads other OS components to complete the system startup and make control available to users through a desktop or other user interface.

If the kernel is damaged or cannot load successfully, the computer will be unable to start completely -- if at all. This will require service to correct hardware damage or restore the operating system kernel to a working version.

Kernel architecture

What is the purpose of the kernel?

In broad terms, an OS kernel performs three primary jobs.

In more granular terms, accomplishing these three kernel functions involves a range of computer tasks, including the following:

Scheduling and management are central to the kernel's operation. Computer hardware can only do one thing at a time. However, a computer's OS components and applications can spawn dozens and even hundreds of processes that the computer must host. It's impossible for all of those processes to use the computer's hardware -- such as a memory address or CPU instruction pipeline -- at the same time. The kernel is the central manager of these processes. It knows which hardware resources are available and which processes need them. It then allocates time for each process to use those resources.

The kernel is critical to a computer's operation, and it requires careful protection within the system's memory. The kernel space it loads into is a protected area of memory. That protected memory space ensures other applications and data don't overwrite or impair the kernel, causing performance problems, instability or other negative consequences. Instead, applications are loaded and executed in a generally available user memory space.

A kernel is often contrasted with a shell, which is the outermost part of an OS that interacts with user commands. Kernel and shell are terms used more frequently in Unix OSes than in IBM mainframe and Microsoft Windows systems.

A kernel is not to be confused with a BIOS, which is an independent program stored on a chip within a computer's circuit board.

Device drivers

A key part of kernel operation is communication with hardware devices inside and outside of the physical computer. However, it is impractical to write an OS capable of interacting with every possible device in existence. Instead, kernels rely on the ability of device drivers to add kernel support for specialized devices, such as printers and graphics adapters.

When an OS is installed on a computer, the installation adds device drivers for any specific devices detected within the computer. This helps tailor the OS installation to the specific system with just enough components to support the devices present. When a new or better device replaces an existing device, the device driver is updated or replaced .

There are several types of device drivers. Each addresses a different data transfer type. The following are some of the main driver types:

Device drivers are classified as kernel or user. A kernel mode device driver is a generic driver that is loaded along with the OS. These drivers are often suited to small categories of major hardware devices, such as CPU and motherboard device drivers.

User mode device drivers encompass an array of ad hoc drivers used for aftermarket, user-added devices, such as printers, graphics adapters, mice, advanced sound systems and other plug-and-play devices.

The OS needs the code that makes up the kernel. Consequently, the kernel code is usually loaded into an area in the computer storage that is protected so that it will not be overlaid with less frequently used parts of the OS.

Kernel mode vs. user mode

Computer designers have long understood the importance of security and the need to protect critical aspects of the computer's behavior. Long before the internet, or even the emergence of networks, designers carefully managed how software components accessed system hardware and resources. Processors were developed to support two operating modes: kernel mode and user mode.

Kernel mode

Kernel mode refers to the processor mode that enables software to have full and unrestricted access to the system and its resources. The OS kernel and kernel drivers, such as the file system driver, are loaded into protected memory space and operate in this highly privileged kernel mode.

User mode refers to the processor mode that enables user-based applications, such as a word processor or video game, to load and execute. The kernel prepares the memory space and resources for that application's use and launches the application within that user memory space.

User mode applications are less privileged and cannot access system resources directly. Instead, an application running in user mode must make system calls to the kernel to access system resources. The kernel then acts as a manager, scheduler and gatekeeper for those resources and works to prevent conflicting resource requests.

The processor switches to kernel mode as the kernel processes its system calls and then switches back to user mode to continue operating the application(s).

It's worth noting that kernel and user modes are processor states and have nothing to do with actual solid-state memory. There is nothing intrinsically safe or protected about the memory used for kernel mode. Kernel driver crashes and memory failures within the kernel memory space can still crash the OS and the computer.

Types of kernels

Kernels fall into three architectures: monolithic, microkernel and hybrid. The main difference between these types is the number of address spaces they support.

Overall, these kernel implementations present a tradeoff -- admins get the modularity and fault isolation of a microkernel, or the performance of a monolithic kernel at the cost of that isolation.

Some specific differences among the three kernel types include the following:


Microkernels

Microkernels keep only a minimal set of services in the kernel address space and run the rest in user space. For their communication protocol, microkernels use message passing, which sends data packets, signals and functions to the correct processes. Microkernels also provide greater flexibility than monolithic kernels; to add a new service, admins modify the user address space for a microkernel.

Because of their isolated nature, microkernels are more secure than monolithic kernels. They remain unaffected if one service within the address space fails.

Monolithic kernels

Monolithic kernels are larger than microkernels, because they house both kernel and user services in the same address space. Monolithic kernels use a faster system call communication protocol than microkernels to execute processes between the hardware and software. They are less flexible than microkernels and require more work; admins must reconstruct the entire kernel to support a new service.

Monolithic kernels pose a greater security risk to systems than microkernels because, if a service fails, the entire system shuts down. Monolithic kernels also place far more code in kernel space than a microkernel does, so a single bug has a wider reach and debugging takes more effort.

The Linux kernel is a monolithic kernel that is constantly growing; it had 20 million lines of code in 2018. From a foundational level, it is layered into a variety of subsystems. These main groups include a system call interface, process management, network stack, memory management, virtual file system and device drivers.

Administrators can port the Linux kernel into their OSes and run live updates. These features, along with the fact that Linux is open source , make it more suitable for server systems and environments that require real-time maintenance.

Hybrid kernels

Apple developed the XNU OS kernel in 1996 as a hybrid of the Mach and Berkeley Software Distribution (BSD) kernels and paired it with an Objective-C application programming interface (API). Because it is a combination of the monolithic kernel and the microkernel, it has increased modularity, and parts of the OS gain memory protection.

Diagram of user and kernel address space in Windows 10

History and development of the kernel

Before the kernel, developers coded actions directly to the processor, instead of relying on an OS to complete interactions between hardware and software.

The first attempt to create an OS that used a kernel to pass messages was in 1969 with the RC 4000 Multiprogramming System. Programmer Per Brinch Hansen discovered it was easier to create a nucleus and then build up an OS, instead of converting existing OSes to be compatible with new hardware. This nucleus -- or kernel -- contained all source code to facilitate communications and support systems, eliminating the need to directly program on the CPU.

After RC 4000, Bell Labs researchers started work on Unix, which radically changed OS development and kernel development and integration. The goal of Unix was to create smaller utilities that do specific tasks well instead of having system utilities try to multitask. From a user standpoint, this simplifies creating shell scripts that combine simple tools.

As Unix adoption increased, the market started to see a variety of Unix-like computer OSes, including BSD, NeXTSTEP and Linux. Unix's structure perpetuated the idea that it was easier to build a kernel on top of an OS that reused software and had consistent hardware, instead of relying on a time-shared system that didn't require an OS.

Unix brought OSes to more individual systems, but researchers at Carnegie Mellon expanded kernel technology. From 1985 to 1994, they expanded work on the Mach kernel. Unlike BSD, the Mach kernel is OS-agnostic and supports multiple processor architectures. Researchers made it binary-compatible with existing BSD software, enabling it to be available for immediate use and continued experimentation.

The Mach kernel's original goal was to be a cleaner version of Unix and a more portable version of Carnegie Mellon's Accent interprocess communication (IPC) kernel. Over time, the kernel brought new features, such as ports and IPC-based programs, and ultimately evolved into a microkernel.

Shortly after the Mach kernel, in 1986, Vrije Universiteit Amsterdam developer Andrew Tanenbaum released MINIX (mini-Unix) for educational and research uses. This distribution contained a microkernel-based structure, multitasking, protected mode, extended memory support and an American National Standards Institute C compiler .

The next major advancement in kernel technology came in 1992, with the release of the Linux kernel. Founder Linus Torvalds developed it as a hobby, but he still licensed the kernel under the GNU General Public License (GPL), making it open source. It was first released with 176,250 lines of code.

The majority of OSes -- and their kernels -- can be traced back to Unix, but there is one outlier: Windows. With the popularity of DOS- and IBM-compatible PCs, Microsoft first built its OS on DOS and later developed its own NT kernel. That is why writing commands for Windows differs from Unix-based systems.


Kernel in Operating System

The kernel is the central component of an operating system: it manages the operations of the computer and its hardware, most notably memory and CPU time. It acts as a bridge between applications and the data processing performed at the hardware level, using inter-process communication and system calls.

The kernel is the first part of the operating system to load into memory, and it remains there until the operating system is shut down. It is responsible for tasks such as disk management, task management and memory management.

• The kernel maintains a process table that keeps track of all active processes.
• The process table contains a per-process region table whose entries point to entries in the region table.
• The kernel loads an executable file into memory during the 'exec' system call.

It decides which process should be allocated to the processor for execution and which should be kept in main memory. It acts as an interface between user applications and hardware; its major aim is to manage communication between software (user-level applications) and hardware (the CPU and disk memory).

Objectives of the Kernel:

Types of Kernel:

1. Monolithic Kernel –

A type of kernel in which all operating system services operate in kernel space. Its components are interdependent, and its code base is huge and complex.


Advantage: It has good performance.

Disadvantage: It has dependencies between system components and millions of lines of code.

2. Micro Kernel – A type of kernel with a minimalist approach: it keeps only virtual memory and thread scheduling in kernel space and puts the rest in user space. With fewer services in kernel space, it is more stable.

Example :    

3. Hybrid Kernel – A combination of the monolithic kernel and the microkernel. It has the speed and design of a monolithic kernel and the modularity and stability of a microkernel.

4. Exo Kernel – A type of kernel that follows the end-to-end principle. It has as few hardware abstractions as possible and allocates physical resources directly to applications.

5. Nano Kernel – A type of kernel that offers hardware abstraction but no system services. Since a microkernel also lacks most system services, the two terms have become nearly analogous.



In this series of articles I describe how you can write a Linux kernel module for an embedded Linux device. I begin with a straightforward “Hello World!” loadable kernel module (LKM) and work towards developing a module that can control GPIOs on an embedded Linux device (such as the BeagleBone) through the use of IRQs. I will add further follow-up articles as I identify suitable applications.

This is a complex topic that will take time to work through. Therefore, I have broken the discussion up over a number of articles, each providing a practical example and outcome. There are entire books written on this topic, so it will be difficult to cover absolutely every aspect. There are also other articles available on writing kernel modules; however, the examples presented here are built and tested under the Linux kernel 3.8.X+, ensuring that the material is up to date and relevant, and I have focused on interfacing to hardware on embedded systems. I have also aligned the tasks performed against my book, Exploring BeagleBone, although the articles are self-contained and do not require that you own a copy of the book.

This article is focused on the system configuration, tools and code required to build and deploy a “Hello World!” kernel module. The second article in this series examines the topic of writing character device drivers and how to write C/C++ programs in user space that can communicate with kernel space modules. The third article examines the use of the kernel space GPIO library code — it combines the content of the first two articles to develop interrupt-driven code that can be controlled from Linux user space. For example, Figure 1 illustrates an oscilloscope capture of an interrupt-driven kernel module that triggers an LED to light when a button is pressed (click for a larger version). Under regular embedded Linux (i.e., not a real-time variant), this code demonstrates a response time of approximately 20 microseconds (±5μs), with negligible CPU overhead.

What is a Kernel Module?

A loadable kernel module (LKM) is a mechanism for adding code to, or removing code from, the Linux kernel at run time. They are ideal for device drivers, enabling the kernel to communicate with the hardware without it having to know how the hardware works. The alternative to LKMs would be to build the code for each and every driver into the Linux kernel.

Figure 2: Linux user space and kernel space

Without this modular capability, the Linux kernel would be very large, as it would have to support every driver that would ever be needed on the BBB. You would also have to rebuild the kernel every time you wanted to add new hardware or update a device driver. The downside of LKMs is that driver files have to be maintained for each device. LKMs are loaded at run time, but they do not execute in user space — they are essentially part of the kernel.

Kernel modules run in kernel space and applications run in user space, as illustrated in Figure 2. Both kernel space and user space have their own unique memory address spaces that do not overlap. This approach ensures that applications running in user space have a consistent view of the hardware, regardless of the hardware platform. The kernel services are then made available to the user space in a controlled way through the use of system calls. The kernel also prevents individual user-space applications from conflicting with each other or from accessing restricted resources through the use of protection levels (e.g., superuser versus regular user permissions).

Why Write a Kernel Module?

When interfacing to electronics circuits under embedded Linux you are exposed to sysfs and the use of low-level file operations for interfacing to electronics circuits. This approach can appear to be inefficient (especially if you have experience of traditional embedded systems); however, these file entries are memory mapped and the performance is sufficient for many applications. I have demonstrated in my book that it is possible to achieve response times of about one third of a millisecond, with negligible CPU overhead, from within Linux user space by using pthreads, callback functions and sys/poll.h .

An alternative approach is to use kernel code, which has support for interrupts. However, kernel code is difficult to write and debug. My advice is that you should always try to accomplish your task in Linux user space, unless you are certain that there is no other possible way!

Source Code for this Discussion

All of the code for this discussion is available in the GitHub repository for the book Exploring BeagleBone. The code can be viewed publicly at: the ExploringBB GitHub Kernel Project directory , and/or you can clone the repository on your BeagleBone (or other Linux device) by typing:

The /extras/kernel/hello directory is the most important resource for this article. The auto-generated Doxygen documentation for these code examples is available in HTML format and PDF format .

Prepare the System for Building LKMs

The system must be prepared to build kernel code, and to do this you must have the Linux headers installed on your device. On a typical Linux desktop machine you can use your package manager to locate the correct package to install. For example, under 64-bit Debian you can use:

molloyd@debian:~$ sudo apt-get update
molloyd@debian:~$ apt-cache search linux-headers-$(uname -r)
linux-headers-3.16.0-4-amd64 - Header files for Linux 3.16.0-4-amd64
molloyd@debian:~$ sudo apt-get install linux-headers-3.16.0-4-amd64
molloyd@debian:~$ cd /usr/src/linux-headers-3.16.0-4-amd64/
molloyd@debian:/usr/src/linux-headers-3.16.0-4-amd64$ ls
arch  include  Makefile  Module.symvers  scripts

You can complete the first two articles in this series using any flavor of desktop Linux. However, in this series of articles I build the LKM on the BeagleBone itself, which simplifies the process when compared to cross-compiling. You must install the headers for the exact version of your kernel build. Similar to the desktop installation, use uname to identify the correct installation. For example:

molloyd@beaglebone:~$ uname -a
Linux beaglebone 3.8.13-bone70 #1 SMP Fri Jan 23 02:15:42 UTC 2015 armv7l GNU/Linux

You can download the Linux headers for the BeagleBone platform from Robert Nelson's website. For example, at: http://rcn-ee.net/deb/precise-armhf/ . Choose the exact kernel build, and download and install those Linux headers on your BeagleBone. For example:

molloyd@beaglebone:~/tmp$ wget http://rcn-ee.net/deb/precise-armhf/v3.8.13-bone70/linux-headers-3.8.13-bone70_1precise_armhf.deb
100%[===========================>] 8,451,080   2.52M/s   in 3.2s
2015-03-17 22:35:45 (2.52 MB/s) - 'linux-headers-3.8.13-bone70_1precise_armhf.deb' saved [8451080/8451080]
molloyd@beaglebone:~/tmp$ sudo dpkg -i ./linux-headers-3.8.13-bone70_1precise_armhf.deb
Selecting previously unselected package linux-headers-3.8.13-bone70

Under the 3.8.13-bone47 Debian distribution for the BeagleBone, you may have to perform an unusual step of creating an empty file timex.h (i.e., touch timex.h) in the directory /usr/src/linux-headers-3.8.13-bone47/arch/arm/include/mach. This step is not necessary under the bone70 build.

It is very easy to crash the system when you are writing and testing LKMs. It is always possible that such a system crash could corrupt your file system — it is unlikely, but it is possible. Please back up your data and/or use an embedded system, such as the BeagleBone, which can easily be re-flashed. Performing a sudo reboot , or pressing the reset button on the BeagleBone will usually put everything back in order. No BeagleBones were corrupted in the writing of these articles despite many, many system crashes!

The Module Code

The run-time life cycle of a typical computer program is reasonably straightforward. A loader allocates memory for the program, then loads the program and any required shared libraries. Instruction execution begins at some entry point (typically the main() point in C/C++ programs), statements are executed, exceptions are thrown, dynamic memory is allocated and deallocated, and the program eventually runs to completion. On program exit, the operating system identifies any memory leaks and frees lost memory to the pool.

A kernel module is not an application — for a start there is no main() function! Some of the key differences are that kernel modules:

The concepts above are a lot to digest and it is important that they are all addressed, but not all in the first article! Listing 1 provides the code for a first example LKM. When no kernel argument is provided, the code uses the printk() function to display “Hello world!…” in the kernel logs. If the argument “Derek” is provided, then the logs will display “Hello Derek!…” The comments in Listing 1, which are written using a Doxygen style, describe the role of each statement. Further description is available after the code listing below.
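Listing 1 itself did not survive the transfer to this page, so the sketch below reconstructs a comparable "Hello World" LKM from the behavior this article describes: the printk() messages that later appear in kern.log, the charp parameter name, and the module metadata reported by modinfo. Treat it as an approximation of the original listing rather than a verbatim copy (it builds only against a kernel source tree, not as a user-space program).

```c
#include <linux/init.h>      /* __init and __exit macros */
#include <linux/module.h>    /* core header for loading LKMs */
#include <linux/kernel.h>    /* printk() and KERN_INFO */

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Derek Molloy");
MODULE_DESCRIPTION("A simple Linux driver for the BBB.");
MODULE_VERSION("0.1");

static char *name = "world";                 /* default parameter value */
module_param(name, charp, S_IRUGO);          /* charp = char pointer, readable */
MODULE_PARM_DESC(name, "The name to display in /var/log/kernel.log.");

/* Called when the module is loaded (insmod). */
static int __init hello_init(void) {
    printk(KERN_INFO "EBB: Hello %s from the BBB LKM!\n", name);
    return 0;   /* 0 indicates successful initialization */
}

/* Called when the module is unloaded (rmmod). */
static void __exit hello_exit(void) {
    printk(KERN_INFO "EBB: Goodbye %s from the BBB LKM!\n", name);
}

module_init(hello_init);
module_exit(hello_exit);
```

Passing name=Derek at load time replaces "world" in both messages, which matches the behavior tested later in this article.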

In addition to the points described by the comments in Listing 1, there are some additional points:

The next step is to build this code into a kernel module.

Building the Module Code

A Makefile is required to build the kernel module — in fact, it is a special kbuild Makefile . The kbuild Makefile required to build the kernel module in this article can be viewed in Listing 2.
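Listing 2 is likewise missing from this copy, so here is the kind of kbuild Makefile the text goes on to describe, assuming the module source file is named hello.c. The obj-m goal, the -C switch and the M= assignment are all discussed in the paragraphs that follow.

```make
obj-m += hello.o

all:
	make -C /lib/modules/$(shell uname -r)/build/ M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build/ M=$(PWD) clean
```

Note that kbuild Makefile recipes must be indented with a tab character, as in any Makefile.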

The first line of this Makefile is called a goal definition and it defines the module to be built ( hello.o ). The syntax is surprisingly intricate, for example obj-m defines a loadable module goal, whereas obj-y indicates a built-in object goal. The syntax becomes more complex when a module is to be built from multiple objects, but this is sufficient to build this example LKM.

The remainder of the Makefile is similar to a regular Makefile. The $(shell uname -r) is a useful call to return the current kernel build version -- this ensures a degree of portability for the Makefile. The -C option switches the directory to the kernel directory before performing any make tasks. The M=$(PWD) variable assignment tells the make command where the actual project files exist. The modules target is the default target for external kernel modules. An alternative target is modules_install, which would install the module (the make command would have to be executed with superuser permissions and the module installation path is required).

All going well, the process to build the kernel module should be straightforward, provided that you have installed the Linux headers as described earlier. The steps are as follows:

molloyd@beaglebone:~/exploringBB/extras/kernel/hello$ ls -l
total 8
-rw-r--r-- 1 molloyd molloyd  154 Mar 17 17:47 Makefile
-rw-r--r-- 1 molloyd molloyd 2288 Apr  4 23:26 hello.c
molloyd@beaglebone:~/exploringBB/extras/kernel/hello$ make
make -C /lib/modules/3.8.13-bone70/build/ M=/home/molloyd/exploringBB/extras/kernel/hello modules
make[1]: Entering directory '/usr/src/linux-headers-3.8.13-bone70'
  CC [M]  /home/molloyd/exploringBB/extras/kernel/hello/hello.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /home/molloyd/exploringBB/extras/kernel/hello/hello.mod.o
  LD [M]  /home/molloyd/exploringBB/extras/kernel/hello/hello.ko
make[1]: Leaving directory '/usr/src/linux-headers-3.8.13-bone70'
molloyd@beaglebone:~/exploringBB/extras/kernel/hello$ ls
Makefile  Module.symvers  hello.c  hello.ko  hello.mod.c  hello.mod.o  hello.o  modules.order

You can see that there is now a hello loadable kernel module in the build directory with the file extension .ko.

Testing the LKM

This module can now be loaded using the kernel module tools as follows:

molloyd@beaglebone:~/exploringBB/extras/kernel/hello$ ls -l *.ko
-rw-r--r-- 1 molloyd molloyd 4219 Apr  4 23:27 hello.ko
molloyd@beaglebone:~/exploringBB/extras/kernel/hello$ sudo insmod hello.ko
molloyd@beaglebone:~/exploringBB/extras/kernel/hello$ lsmod
Module                  Size  Used by
hello                    972  0
g_multi                50407  2
libcomposite           15028  1 g_multi
omap_rng                4062  0
mt7601Usta            639170  0

You can get information about the module using the modinfo command, which will identify the description, author and any module parameters that are defined:

molloyd@beaglebone:~/exploringBB/extras/kernel/hello$ modinfo hello.ko
filename:       /home/molloyd/exploringBB/extras/kernel/hello/hello.ko
description:    A simple Linux driver for the BBB.
author:         Derek Molloy
license:        GPL
srcversion:     9E3F5ECAB0272E3314BEF96
depends:
vermagic:       3.8.13-bone70 SMP mod_unload modversions ARMv7 thumb2 p2v8
parm:           name:The name to display in /var/log/kernel.log. (charp)

The module can be unloaded using the rmmod command:

molloyd@beaglebone:~/exploringBB/extras/kernel/hello$ sudo rmmod hello.ko

You can repeat these steps and view the output in the kernel log that results from the use of the printk() function. I recommend that you use a second terminal window and view the output as your LKM is loaded and unloaded, as follows:

molloyd@beaglebone:~$ sudo su -
[sudo] password for molloyd:
root@beaglebone:~# cd /var/log
root@beaglebone:/var/log# tail -f kern.log
...
Apr  4 23:34:32 beaglebone kernel: [21613.495523] EBB: Hello world from the BBB LKM!
Apr  4 23:35:17 beaglebone kernel: [21658.306647] EBB: Goodbye world from the BBB LKM!
^C
root@beaglebone:/var/log#

Testing the LKM Custom Parameter

The code in Listing 1 also contains a custom parameter, which allows an argument to be passed to the kernel module on initialization. This feature can be tested as follows:

molloyd@beaglebone:~/exploringBB/extras/kernel/hello$ sudo insmod hello.ko name=Derek

If you view /var/log/kern.log at this point, you will see "Hello Derek" in place of "Hello world". However, it is worth having a look at /proc and /sys first.

Rather than using the lsmod command, you can also find out information about the kernel module that is loaded, as follows:

molloyd@beaglebone:~/exploringBB/extras/kernel/hello$ cd /proc
molloyd@beaglebone:/proc$ cat modules | grep hello
hello 972 0 - Live 0xbf903000 (O)

This is the same information that is provided by the lsmod command, but it also provides the current kernel memory offset for the loaded module, which is useful for debugging.

The LKM also has an entry under /sys/module, which provides you with direct access to the custom parameter state. For example:

molloyd@beaglebone:/proc$ cd /sys/module
molloyd@beaglebone:/sys/module$ ls -l | grep hello
drwxr-xr-x 6 root root 0 Apr  5 00:02 hello
molloyd@beaglebone:/sys/module$ cd hello
molloyd@beaglebone:/sys/module/hello$ ls -l
total 0
-r--r--r-- 1 root root 4096 Apr  5 00:03 coresize
drwxr-xr-x 2 root root    0 Apr  5 00:03 holders
-r--r--r-- 1 root root 4096 Apr  5 00:03 initsize
-r--r--r-- 1 root root 4096 Apr  5 00:03 initstate
drwxr-xr-x 2 root root    0 Apr  5 00:03 notes
drwxr-xr-x 2 root root    0 Apr  5 00:03 parameters
-r--r--r-- 1 root root 4096 Apr  5 00:03 refcnt
drwxr-xr-x 2 root root    0 Apr  5 00:03 sections
-r--r--r-- 1 root root 4096 Apr  5 00:03 srcversion
-r--r--r-- 1 root root 4096 Apr  5 00:03 taint
--w------- 1 root root 4096 Apr  5 00:02 uevent
-r--r--r-- 1 root root 4096 Apr  5 00:02 version
molloyd@beaglebone:/sys/module/hello$ cat version
0.1
molloyd@beaglebone:/sys/module/hello$ cat taint
O

The version value is 0.1 as per the MODULE_VERSION("0.1") entry, and the taint value is O (an out-of-tree module) rather than P (proprietary), thanks to the license that has been chosen, which is MODULE_LICENSE("GPL").

The custom parameter can be viewed as follows:

molloyd@beaglebone:/sys/module/hello$ cd parameters/
molloyd@beaglebone:/sys/module/hello/parameters$ ls -l
total 0
-r--r--r-- 1 root root 4096 Apr  5 00:03 name
molloyd@beaglebone:/sys/module/hello/parameters$ cat name
Derek

You can see that the state of the name variable is displayed, and that superuser permissions were not required to read the value. The latter is due to the S_IRUGO argument that was used in defining the module parameter. It is possible to configure this value for write access, but your module code will need to detect such a state change and act accordingly. Finally, you can remove the module and observe the output:

molloyd@beaglebone:/sys/module/hello/parameters$ sudo rmmod hello.ko

As expected, this will result in the output message in the kernel logs:

root@beaglebone:/var/log# tail -f kern.log
...
Apr  5 00:02:20 beaglebone kernel: [23281.070193] EBB: Hello Derek from the BBB LKM!
Apr  5 00:08:18 beaglebone kernel: [23639.160009] EBB: Goodbye Derek from the BBB LKM!


Hopefully you have built your first loadable kernel module (LKM). Despite the simplicity of this module's functionality, there was a lot of material to cover. By the end of this article, you should have a broad idea of how loadable kernel modules work, have your system configured to build, load and unload such modules, and be able to define custom parameters for your LKMs.

The next step is to build on this work to develop a kernel-space LKM that can communicate with a user-space C/C++ program by developing a basic character driver. See "Writing a Linux Kernel Module — Part 2: A Character Device". Then we can move on to the more interesting task of interacting with GPIOs.


When I checked the kernel version of my BeagleBone Black using uname -a, it showed "Linux beaglebone 3.8.13-bone71 #1 SMP Tue Mar 17 18:07:44 UTC 2015 armv7l GNU/Linux". So I tried to find the proper Linux headers and found them at https://rcn-ee.net/deb/wheezy-armhf/v3.8.13-bone71/ .

I downloaded it with the following command: wget https://rcn-ee.net/deb/wheezy-armhf/v3.8.13-bone71/linux-headers-3.8.13-bone71_1wheezy_armhf.deb

Is this right for my BBB?

Thanks in advance, John

After installing the Linux headers for bone71, I followed the sequence you stated, and I saw the same results as you.

I have installed the 3.8.13-bone47 headers but there is no “mach” file under include. Under “include” there are four files “asm debug generated uapi” and under “asm” there is the mach file.

Should I create the file "timex.h" there?

Hi there, you may have to create sub directories (I can’t remember if I did), but it should be empty and have the full path “/usr/src/linux-headers-3.8.13-bone47/arch/arm/include/mach/timex.h”. Kind regards, Derek.

I created "timex.h" in /asm/mach, as I was getting the error "…./include/asm/mach/timex.h — no such file or directory". This solved the problem, and the LKM compiled and loaded successfully.

Thanks for the contributions.

Now I have only three questions in mind:
1. Can I use interrupts in user space and do run-time debugging in Eclipse? (At present I am using interrupts in kernel space and compiling and running code through the command terminal, but not in the Eclipse IDE; I would like to debug LKM code in the Eclipse IDE as well.)
2. Can I make a program which contains both user-space and kernel-space code?
3. Can I run a kernel program in Eclipse and do run-time debugging in Eclipse?

I have read a lot of articles written by you on embedded systems programming, and all of them are utterly useful. Your tutorial videos are crisp and clear; they go through the basics of the topic in question very smoothly. I would request that you create a course on EdX or Coursera on object-oriented embedded systems programming on the ARM platform. It would be a great resource for all of us.

Thanks Regards Rish

This is really useful information. It is very good to have sites giving so much information about embedded systems.

Great article, Derek. Thanks for writing it.

Hi, these articles are AMAZING!!!! I've been trying so hard to find easy-to-follow, readable articles on the internet but failed miserably; I just stumbled upon this article on Reddit. Thanks a lot, Dr. Derek!

Derek: Well done! I’m going to use this in my class next week.


p.s. Why not use “apt-get install linux-headers-3.8.13-bone77” to load the headers?

Thanks Mark! The apt-get call wouldn’t work at the time — Glad to see that it is fixed! Kind regards, Derek.

Great articles! Do I need an LKM to get the SGX module on the Sitara running on Ubuntu? Or do you have any advice for me on how it works?

Hey there Derek, I’m using the BeagleBone black with: Linux beaglebone 3.8.13-bone47

Should I use the "v3.8.13-bone71" headers which are available currently? Will they be suitable for my BeagleBone?

THANK YOU, for some awesome articles!!!

Okay, it turns out that the proper version of the header files is quite important. I got the ones from http://rcn-ee.net/deb/trusty-armhf/v3.8.13-bone47/ and added the mach/timex.h file, and was then able to follow this guide. Thank you.

Great tutorial, and great book. I would like to ask: if I want to cross-compile a device driver, i.e. on a Jessie host machine, what kind of "steps" do I have to perform, since the arch and config file are not the same? What I would like to do is (if possible) develop my device driver and build it on the host, and then run it on the BBB. Could you please tell me where I could find some info about this? Thank you, Giorgio

It seems there are no .deb files anywhere under Robert Nelson's website anymore — I'm just seeing patch .diff.gz files. Any idea where to get .deb packages for the various BeagleBone Black Debian releases anymore?

I figured it out — maybe this will help someone if you approve this comment. I’m running an older 2014-04-23 debian image (from dogtag file) kernel 3.8.13-bone69

To get the kernel headers from Robert Nelsons repo I had to add this to /etc/apt/sources.list

deb [arch=armhf] http://rcn-ee.net/repos/debian wheezy main

Then apt-get update and apt-cache search kernel-headers reveals a whole bunch of headers available to install including my kernel 3.8.13-bone69

Now I'm going to try to compile the kernel module Nathanial Lewis wrote that supports the TI eQEP encoder hardware.

Thanks for all you do — makes using the beaglebone platform a pleasure!

Hi Mr. Molloy,

I have looked for Linux headers for the BeagleBone on Robert Nelson's website to no avail. Do you have any idea how to get those headers?

Thanks for the good work

You should try my comment above: add the line shown to /etc/apt/sources.list and do apt-get update. Robert Nelson's site is a Debian repository, and when the apt package system accesses it you will find the Linux headers no problem. Good luck!

Thanks for the advice.


Hi Mr. Molloy, here is the result:

$ sudo apt-get install linux-headers-$(uname -r)
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package linux-headers-3.8.13-bone50
E: Couldn't find any package by regex 'linux-headers-3.8.13-bone50'

Problem solved by using the latest kernel: bone79.

Here is Robert Nelson's explanation:

bone50 was released around May 12, 2014.

It wasn't until later that summer/fall 2014 that I got the repo up and running for every kernel release.

"wget http://rcn-ee.net/deb/precise-armhf/v3.8.13-bone70/linux-headers-3.8.13-bone70_1precise_armhf.deb" — the file is gone from the website. Would you please tell me where I can get the file? I am trying to use 3.8.13-bone70 on a BeagleBone Black to develop my test driver for an I2C device.

Hi Derek, really nice example of a device driver with interrupts! I am trying to build an LKM using a cross-compiler on an Ubuntu Linux PC. My target is not exactly the BeagleBone Black but a very similar board for our specific purposes — we use the TI AM3358 processor and Buildroot. I just took button.c and tried to compile it, and I get the error:

error: negative width in bit-field
#define BUILD_BUG_ON_ZERO(e) (sizeof((struct { int:-!!(e); }))
                              ^

I checked this file, and since the __CHECKER__ flag is undefined it is going to this code... What might I be doing wrong? Thanks!

This is how I downloaded the Linux headers:
1) Add this to the top of the file /etc/apt/sources.list: deb [arch=armhf] http://rcn-ee.net/repos/debian wheezy main
2) apt-get update
3) apt-get upgrade
4) Reboot
5) uname -r gave me 4.1.15-bone18
6) apt-get install linux-headers-4.1.15-bone18
7) The new Linux headers are now in /usr/src/

Hi, I tried to write an LKM for an e-paper LCD display which uses SPI, PWM and GPIOs. After loading the Linux device tree for the BBB, I get an error message in dmesg. The last few lines of dmesg:

[  269.742198] check pwm
[  269.742273] /ocp/[email protected]/[email protected]: could not get #pwm-cells for /ocp/[email protected]/[email protected]
[  269.752250] epd: Cannot get pwm -22
[  269.777286] Call epd_therm_remove()
[  269.781123] i2c temperature probe excluded
[  269.799667] epd: Fail to create COG-G1
[  269.812813] prvdsp,g1-epd: probe of spi1.0 failed with error -22

May I get some help with device tree coding for the BBB?

regards venkat

great tutorial sir…!

Hey Derek, I’m using the BeagleBone black with: Linux beaglebone 3.8.13-bone47

The sudo apt-get update command gives this error:

W: There is no public key available for the following key IDs: 7638D0442B90D010
W: There is no public key available for the following key IDs: 7638D0442B90D010
W: GPG error: http://rcn-ee.net wheezy Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY D284E608A4C46402
W: There is no public key available for the following key IDs: 9D6D8F6BC857C906

I am also unable to install the header files. Please help me with this.

I have the same problem. I'm also using Linux beaglebone 3.8.13-bone47, and I get the same kind of error; it was not possible to download the headers. Any help?

Hi Derek, when I run make, it shows the following error and my module doesn't load:

make -C /lib/modules/3.8.13-bone79/build/ M=/home/debian/Desktop/exploringBB-master/extras/kernel/hello modules
make: *** /lib/modules/3.8.13-bone79/build/: No such file or directory. Stop.
make: *** [all] Error 2

In any of your writing, do you compare the merits of using LKMs and UIO (user-space I/O) for developing device drivers?

Just wondered whether your new Raspberry Pi book discusses this?

Best regards

I think we should avoid developing modules for the kernel; we should try to connect the kernel with a simpler, more popular programming language. In my opinion the present language is too complex, and programmers would perform better if help were easier to get and they weren't wrecked on the code. I think I know how to create an operating system and how it makes sense from the transistors up, so we could make the syntax simpler, but why haven't we?

Thank you, highly informative article. I have been running the example on 4.1.15-ti-r43 without any issue. The linux kernel headers were already available from the sd card image downloaded from beaglebone.org (dpkg -l | grep Linux-headers).

insmod: ERROR: could not insert module hello.ko: Required key not available
Can you tell me what I should do now?

Derek, this is a great initiative and helps Linux Noobs like us migrate from bare metal to Linux OS.

In case you are having trouble installing the linux kernel headers, like file not available etc, then you can refer to the below link.


It is a different way to install the linux kernel headers of BBB.

Thank Derek once again for your knowledge sharing and this platform.

Thanks a lot for this tutorial Derek.

I must say, really nice tutorials, and you have done really well in organizing the material. I have done some character driver programming on my Linux laptop. Now I want to interface some hardware and do driver programming on the BeagleBone. The problem I am facing is that I am not getting the kernel headers for the version of Linux installed on my BeagleBone.

I have 3.8.13-bone81, but I am not able to get the kernel headers. I would really appreciate it if you could guide me in this respect.

Hey Derek, I just want to ask about the Linux headers for cross-compiling a kernel module on the host machine. Here you write to download the Linux headers for the host machine's kernel version, but in another tutorial it was written to download the Linux headers for the BBB kernel version to the host machine. I also have another Linux device based on the AM335x, and I succeeded in cross-compiling a kernel module and insmod-ing it on the device by following the tutorial's explanation about the Linux kernel headers of the target device. I am very confused about the cross-compiling process. Also, if I want to compile on my BBB, I need to download the Linux headers for my BeagleBone kernel, which is 3.8.13-bone50, but I can't find headers for that kernel.

Thanks, orenz


this website provides the commands for automatically downloading the proper headers for the kernel.

That's a great tutorial. Thanks, Mr. Derek.


Hi, I cross-compiled and ran your code on a 2.6.37 kernel, but it gets stuck for about 13 seconds after I do insmod. Do you have any idea why?

Derek – Great writeup. Thanks for taking the time to spin this out. Much appreciated!

Thanks for the intro to developing a LKM. Just what I needed.

There is a typo in the description of line 21. You say the value of name is initialized to “hello”, but in the code it is initialized to “world”.

This article is really great, I am a student interested in exploit development (Windows and Linux). I really want to learn how to develop kernel modules (rootkits) like a professional. Can you give me some references to resourceful materials please.

“Lectures in object-oriented programming with embedded systems, digital and analog electronics, and 3D computer graphics. His research contributions are largely in the fields of computer and machine vision, 3D graphics, embedded systems, and e-Learning.” — YOU ARE THE PERSON I WANT TO BE!

Thanks for writing this

Wow. That is so elegant and logical and clearly explained. Keep it up! I follow up your blog for future post.

Sir, I need your help making drivers for the touchscreen in my custom-kernel Android device (Melfas MIP4 MMS449 touchscreen module). Would you help me make these drivers?

Best Regards

I was able to compile and run my first kernel driver with this blog. Thank you!

The website that you used to download the kernel headers doesn't seem to work anymore.

I used: https://elinux.org/Beagleboard:BeagleBoneBlack_Debian#Installing_kernel_headers

to download my headers.

Specifically, I used the following command to automatically download the correct Linux header version that I needed: sudo apt-get install linux-headers-$(uname -r)

Thanks for this awesome tutorial, but I need one piece of help: I want to know how to do cross-compilation using Ubuntu.

Great tutorial. We are trying an external interrupt on an NVIDIA TX2 (ARM). When I try to build the Hello project, I get this:

make -C /lib/modules/4.4.38-tegra/build/ M=/home/nvidia/Downloads/exploringBB/extras/kernel/hello modules
make[1]: Entering directory '/lib/modules/4.4.38-tegra/build'
make[1]: *** No rule to make target 'modules'. Stop.
make[1]: Leaving directory '/lib/modules/4.4.38-tegra/build'
Makefile:4: recipe for target 'all' failed
make: *** [all] Error 2

I recall that when I do "apt-cache search linux-headers-$(uname -r)", I get nothing; however:

$ cd /usr/src/linux-headers-4.4.38-tegra/
$ ls
arch           drivers   ipc      Makefile         net       sound
block          firmware  Kbuild   mm               README    System.map
certs          fs        Kconfig  modules.builtin  samples   tools
crypto         include   kernel   modules.order    scripts   usr
Documentation  init      lib      Module.symvers   security  virt

It seems it cannot find the header files. Any suggestions?

I think you are missing the linux-headers package. Try this command first: sudo apt-get install --reinstall linux-headers-$(uname -r) linux-headers-generic build-essential dkms git

I tried to strip -s the LKM to minimize its size, as I usually do with a normal binary. On inserting the module I get an error, "Invalid module format", and a dmesg log, "module has no symbols (stripped?)". Is it forbidden to strip a kernel module?

Best regards;

Hey, Thanks for this neat introduction.

I accidentally found the reason behind this weird syntax for modules and built-in code, so I thought I'd share it with others. In the Makefile of every subdirectory, there is a line:

obj-$(CONFIG_MODULES) += module.o

or something very similar. Depending on whether you selected each part as built-in (i.e. said y in the kernel config) or as a module (indicated by 'm'), this variable gathers the appropriate object files.

Thanks again!

This was a very helpful article, thanks Derek! I’ve been looking for some good learning resources on Linux Kernel Modules and this was fantastic. Looking forward to going through the following articles. Cheers!

Good day to you mr Derek Molloy

I am currently using IPFire version 2.21, with a base OS Linux kernel version of 4.14.72.

I have an internal ADSL PCI modem, identified as "Integrated Telecom Express", but no driver module is loaded.

I was following your blog from part 1 and part 2 http://derekmolloy.ie/writing-a-linux-kernel-module-part-2-a-character-device/

It is impossible for me to install the "linux-headers" package for IPFire using the wget command; it is not working on IPFire. Maybe it works only for Ubuntu/Debian.

Can you help me build kernel driver modules for Linux kernel version 4.14.72 and IPFire version 2.21?

Receive my regards and thanks in advance


A really excellent tutorial. Many thanks Derek.

the best tutorial i’ve ever seen,thank you so much

I am using a Phytec Wega board, where I am trying to download the kernel headers but facing a problem. Can you suggest where I can download the kernel headers from?

$ uname -r
3.2.0-PD13.0.0


How do I write a Loadable kernel module for blocking USB ports?


I am working on a proctor project for college where we have to restrict certain things during a test like the internet and such. One of the things is to block or shutdown all USB ports. Could you tell me how I go about doing that. I have been studying module programming in general but other than that any search on the web regarding USB programming doesn't seem relevant to me.

Any help in resources where I can study about USB manipulation would be great.


I assume your users don’t have sudo rights. I think kernel module is a bit of a nuclear option.

You can write udev rules to prevent certain kinds of devices from registering. Or you can power down usb controller (check sysfs for the controls).
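As a sketch of the udev approach (assuming a kernel that exposes the USB "authorized" sysfs attribute; the rule file name is hypothetical), a single rule can de-authorize USB devices as they are added:

```
# /etc/udev/rules.d/99-block-usb.rules  (hypothetical file name)
# De-authorize every newly added USB device. Note this is indiscriminate:
# it will also disable USB keyboards and mice.
ACTION=="add", SUBSYSTEM=="usb", ATTR{authorized}="0"
```

Reload with udevadm control --reload-rules; removing the rule file restores normal behavior for devices plugged in afterwards.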

Actually, this is my college project, so the modules are a necessity. Any help with resources, or guidance on how to get started? I have been studying the Linux Kernel Module Programming Guide, but I don't know how to go on from there.

Actually, would it be possible to write udev rules in a module?

I think what you are looking for is https://usbguard.github.io/

Even if you write a module using misc_register, I don't know if it is possible to access another subsystem like USB. Maybe you can interact with the USB core through the PM (power management) APIs in some manner. I'm curious how you will do that...

The requirements are not clear:

Are the USB ports supposed to be blocked only during the tests, or throughout the boot?

Do all USBs need to be blocked, or is the blocking conditional?

Do you have access on the system to install and load kernel modules?

There are two ways to do this (as I see it):

USB driver: Write a basic char module which is loaded on the USB controller, with basic file operations on the driver. Whenever a request comes from the filesystem, fail all the requests (e.g. open, create, write, read, ioctl) and voilà, you have restricted access to all the USB devices.

Or you could just not register any devices, but a problem here is that they aren't seen in /dev, which I'm not sure whether you want or not.

To do this successfully, you would have to modify udev rules so that the generic driver isn't loaded and your driver is loaded instead. But be aware this will drop all I/O to all USB devices, which means your keyboard and mouse will also stop working.

File system filter driver: Here your module will load on top of the file system. Based on where the I/O is targeted, you would either fail the I/O or pass it down the stack. You could modify something like http://www.redirfs.org/

The USB ports are supposed to be blocked only during the test, and I do have access to the system for installation and loading. Actually, your first method is better, because I need access to the keyboard and mouse. Can you point me towards resources where I can learn this, or will my present source, the Linux Kernel Module Programming Guide, suffice to give me a decent start?

I'm not by any stretch an expert, but on most systems I've played with, USB drives interact with the file system in /mnt/. Is there a way to edit filesystem permissions in kernel modules? My thought is that you would need to explicitly allow any new device found in /mnt/.

Dayum, that's a different approach. I could try to manipulate filesystem permissions rather than blocking ports; this way I won't have to worry about turning the keyboard and mouse back on. Thanks for the help, man. I'll look into this.



Write a Hello World Windows Driver (KMDF)

This article describes how to write a small Universal Windows driver using Kernel-Mode Driver Framework (KMDF) and then deploy and install your driver on a separate computer.

To get started, be sure you have Microsoft Visual Studio , the Windows SDK , and the Windows Driver Kit (WDK) installed.

Debugging Tools for Windows is included when you install the WDK.

Create and build a driver

Open Microsoft Visual Studio. On the File menu, choose New > Project .

In the Create a new project dialog box, select C++ in the left dropdown, choose Windows in the middle dropdown, and choose Driver in the right dropdown.

Select Kernel Mode Driver, Empty (KMDF) from the list of project types. Select Next .

Screen shot of the Visual Studio new project dialog box, showing kernel mode driver selected.

In the Configure your new project dialog box, enter "KmdfHelloWorld" in the Project name field.

When you create a new KMDF or UMDF driver, you must select a driver name that has 32 characters or less. This length limit is defined in wdfglobals.h.

In the Location field, enter the directory where you want to create the new project.

Check Place solution and project in the same directory and select Create .

Screen shot of the Visual Studio configure your new project configuration dialog box. The Create button is highlighted.

Visual Studio creates one project and a solution. You can see them in the Solution Explorer window. (If the Solution Explorer window isn't visible, choose Solution Explorer from the View menu.) The solution has a driver project named KmdfHelloWorld.

Screen shot of the Visual Studio solution explorer window, showing the solution and the empty driver project KmdfHelloWorld.

In the Solution Explorer window, select and hold (or right-click) the KmdfHelloWorld project and choose Configuration Manager . Choose a configuration and platform for the driver project. For example, choose Debug and x64 .

In the Solution Explorer window, again select and hold (or right-click) the KmdfHelloWorld project, choose Add , and then select New Item .

In the Add New Item dialog box, select C++ File . For Name , enter "Driver.c".

The file name extension is .c , not .cpp .

Select Add . The Driver.c file is added under Source Files , as shown here.

Screen shot of the Visual Studio solution explorer window, showing the driver.c file added to the driver project.

Write your first driver code

Now that you've created your empty Hello World project and added the Driver.c source file, you'll write the most basic code necessary for the driver to run by implementing two basic event callback functions.

In Driver.c, start by including these headers:

If you can't add Ntddk.h , open Configuration -> C/C++ -> General -> Additional Include Directories and add C:\Program Files (x86)\Windows Kits\10\Include\<build#>\km , replacing <build#> with the appropriate directory in your WDK installation.

Ntddk.h contains core Windows kernel definitions for all drivers, while Wdf.h contains definitions for drivers based on the Windows Driver Framework (WDF).

Next, provide declarations for the two callbacks you'll use:

Use the following code to write your DriverEntry :

DriverEntry is the entry point for all drivers, like Main() is for many user mode applications. The job of DriverEntry is to initialize driver-wide structures and resources. In this example, you printed "Hello World" for DriverEntry , configured the driver object to register your EvtDeviceAdd callback's entry point, then created the driver object and returned.
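As a sketch of what that DriverEntry might look like, following the description above (not compilable outside a WDK project; the debug-print text is illustrative):

```c
#include <ntddk.h>
#include <wdf.h>

DRIVER_INITIALIZE DriverEntry;
EVT_WDF_DRIVER_DEVICE_ADD KmdfHelloWorldEvtDeviceAdd;

NTSTATUS
DriverEntry(_In_ PDRIVER_OBJECT DriverObject, _In_ PUNICODE_STRING RegistryPath)
{
    WDF_DRIVER_CONFIG config;

    /* Print "Hello World" to the kernel debugger output */
    KdPrintEx((DPFLTR_IHVDRIVER_ID, DPFLTR_INFO_LEVEL,
               "KmdfHelloWorld: DriverEntry\n"));

    /* Register the EvtDeviceAdd callback, then create the driver object */
    WDF_DRIVER_CONFIG_INIT(&config, KmdfHelloWorldEvtDeviceAdd);
    return WdfDriverCreate(DriverObject, RegistryPath,
                           WDF_NO_OBJECT_ATTRIBUTES, &config, WDF_NO_HANDLE);
}
```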

The driver object acts as the parent object for all other framework objects you might create in your driver, which include device objects, I/O queues, timers, spinlocks, and more. For more information about framework objects, see Introduction to Framework Objects .

For DriverEntry , we strongly recommend keeping the name as "DriverEntry" to help with code analysis and debugging.

Next, use the following code to write your KmdfHelloWorldEvtDeviceAdd :

EvtDeviceAdd is invoked by the system when it detects that your device has arrived. Its job is to initialize structures and resources for that device. In this example, you simply printed out a "Hello World" message for EvtDeviceAdd , created the device object, and returned. In other drivers you write, you might create I/O queues for your hardware, set up a device context storage space for device-specific information, or perform other tasks needed to prepare your device.
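A matching sketch of the device-add callback, again buildable only inside a WDK project:

```c
NTSTATUS
KmdfHelloWorldEvtDeviceAdd(_In_ WDFDRIVER Driver,
                           _Inout_ PWDFDEVICE_INIT DeviceInit)
{
    WDFDEVICE hDevice;

    UNREFERENCED_PARAMETER(Driver);

    /* Print "Hello World" when the device arrives */
    KdPrintEx((DPFLTR_IHVDRIVER_ID, DPFLTR_INFO_LEVEL,
               "KmdfHelloWorld: EvtDeviceAdd\n"));

    /* Create the framework device object for the arriving device */
    return WdfDeviceCreate(&DeviceInit, WDF_NO_OBJECT_ATTRIBUTES, &hDevice);
}
```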

For the device add callback, notice how you named it with your driver's name as a prefix ( KmdfHelloWorld EvtDeviceAdd). Generally, we recommend naming your driver's functions in this way to differentiate them from other drivers' functions. DriverEntry is the only one you should name exactly that.

Your complete Driver.c now looks like this:

Save Driver.c.

This example illustrates a fundamental concept of drivers: they're a "collection of callbacks" that, once initialized, sit and wait for the system to call them when it needs something. A system call could be a new device arrival event, an I/O request from a user mode application, a system power shutdown event, a request from another driver, or a surprise removal event when a user unplugs the device unexpectedly. Fortunately, to say "Hello World," you only needed to worry about driver and device creation.

Next, you'll build your driver.

Build the driver

In the Solution Explorer window, select and hold (or right-click) Solution 'KmdfHelloWorld' (1 project) and choose Configuration Manager . Choose a configuration and platform for the driver project. For this exercise, we choose Debug and x64 .

In the Solution Explorer window, select and hold (or right-click) KmdfHelloWorld and choose Properties . In Wpp Tracing > All Options , set Run Wpp tracing to No . Select Apply and then OK .

To build your driver, choose Build Solution from the Build menu. Visual Studio shows the build progress in the Output window. (If the Output window isn't visible, choose Output from the View menu.) When you've verified that the solution built successfully, you can close Visual Studio.

To see the built driver, in File Explorer, go to your KmdfHelloWorld folder, and then to C:\KmdfHelloWorld\x64\Debug\KmdfHelloWorld . The folder includes:

If you see DriverVer set to a date in the future when building your driver, change your driver project settings so that Inf2Cat sets /uselocaltime . To do so, use Configuration Properties->Inf2Cat->General->Use Local Time . Now both Stampinf and Inf2Cat use local time.

Deploy the driver

Typically when you test and debug a driver, the debugger and the driver run on separate computers. The computer that runs the debugger is called the host computer , and the computer that runs the driver is called the target computer . The target computer is also called the test computer .

So far you've used Visual Studio to build a driver on the host computer. Now you need to configure a target computer.

Follow the instructions in Provision a computer for driver deployment and testing (WDK 10) .

When you follow the steps to provision the target computer automatically using a network cable, take note of the port and key. You'll use them later in the debugging step. In this example, we'll use 50000 as the port and as the key.

In real driver debugging scenarios, we recommend using a KDNET-generated key. For more information about how to use KDNET to generate a random key, see the Debug Drivers - Step by Step Lab (Sysvad Kernel Mode) topic.

On the host computer, open your solution in Visual Studio. You can double-click the solution file, KmdfHelloWorld.sln, in your KmdfHelloWorld folder.

In the Solution Explorer window, select and hold (or right-click) the KmdfHelloWorld project, and choose Properties .

In the KmdfHelloWorld Property Pages window, go to Configuration Properties > Driver Install > Deployment , as shown here.

Check Remove previous driver versions before deployment .

For Target Device Name , select the name of the computer that you configured for testing and debugging. In this exercise, we use a computer named MyTestComputer.

Select Hardware ID Driver Update , and enter the hardware ID for your driver. For this exercise, the hardware ID is Root\KmdfHelloWorld. Select OK .

Screen shot showing the kmdfhelloworld property pages window with the deployment driver install selected.

In this exercise, the hardware ID does not identify a real piece of hardware. It identifies an imaginary device that will be given a place in the device tree as a child of the root node. For real hardware, do not select Hardware ID Driver Update ; instead, select Install and Verify . You'll see the hardware ID in your driver's information (INF) file. In the Solution Explorer window, go to KmdfHelloWorld > Driver Files , and double-click KmdfHelloWorld.inf. The hardware ID is located under [Standard.NT$ARCH$].

On the Build menu, choose Deploy Solution . Visual Studio automatically copies the files required to install and run the driver to the target computer. Deployment may take a minute or two.

When you deploy a driver, the driver files are copied to the %Systemdrive%\drivertest\drivers folder on the test computer. If something goes wrong during deployment, you can check to see if the files are copied to the test computer. Verify that the .inf, .cat, test cert, and .sys files, and any other necessary files, are present in the %systemdrive%\drivertest\drivers folder.

For more information about deploying drivers, see Deploying a Driver to a Test Computer .

Install the driver

With your Hello World driver deployed to the target computer, now you'll install the driver. When you previously provisioned the target computer with Visual Studio using the automatic option, Visual Studio set up the target computer to run test signed drivers as part of the provisioning process. Now you just need to install the driver using the DevCon tool.

On the host computer, navigate to the Tools folder in your WDK installation and locate the DevCon tool. For example, look in the following folder:

C:\Program Files (x86)\Windows Kits\10\Tools\x64\devcon.exe

Copy the DevCon tool to your remote computer.

On the target computer, install the driver by navigating to the folder containing the driver files, then running the DevCon tool.

Here's the general syntax for the devcon tool that you'll use to install the driver:

devcon install <INF file> <hardware ID>

The INF file required for installing this driver is KmdfHelloWorld.inf. The INF file contains the hardware ID for installing the driver binary, KmdfHelloWorld.sys . Recall that the hardware ID, located in the INF file, is Root\KmdfHelloWorld .

Open a Command Prompt window as Administrator. Navigate to your folder containing the built driver .sys file and enter this command:

devcon install kmdfhelloworld.inf root\kmdfhelloworld

If you get an error message about devcon not being recognized, try adding the path to the devcon tool. For example, if you copied it to a folder on the target computer called C:\Tools , then try using the following command:

c:\tools\devcon install kmdfhelloworld.inf root\kmdfhelloworld

A dialog box will appear indicating that the test driver is an unsigned driver. Select Install this driver anyway to proceed.

Screenshot of the driver installation warning.

Debug the driver

Now that you've installed your KmdfHelloWorld driver on the target computer, you'll attach a debugger remotely from the host computer.

On the host computer, open a Command Prompt window as Administrator. Change to the WinDbg.exe directory. We'll use the x64 version of WinDbg.exe from the Windows Driver Kit (WDK) that was installed as part of the Windows kit installation. Here's the default path to WinDbg.exe:

C:\Program Files (x86)\Windows Kits\10\Debuggers\x64

Launch WinDbg to connect to a kernel debug session on the target computer by using the following command. The value for the port and key should be the same as what you used to provision the target computer. We'll use 50000 for the port and for the key, the values we used during the deploy step. The k flag indicates that this is a kernel debug session.

WinDbg -k net:port=50000,key=

On the Debug menu, choose Break . The debugger on the host computer will break into the target computer. In the Debugger Command window, you can see the kernel debugging command prompt: kd> .

At this point, you can experiment with the debugger by entering commands at the kd> prompt. For example, you could try these commands:
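The command list is missing here; a few standard kd commands you could try (these are general WinDbg kernel-debugger commands, not specific to this driver):

```
lm                      list loaded modules (KmdfHelloWorld.sys should appear)
!devnode 0 1            dump the device tree, including the root-enumerated node
dt nt!_DRIVER_OBJECT    display the layout of the DRIVER_OBJECT structure
```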

To let the target computer run again, choose Go from the Debug menu or press "g," then press "enter."

To stop the debugging session, choose Detach Debuggee from the Debug menu.

Make sure you use the "go" command to let the target computer run again before exiting the debugger, or the target computer will remain unresponsive to your mouse and keyboard input because it is still talking to the debugger.

For a detailed step-by-step walkthrough of the driver debugging process, see Debug Universal Drivers - Step by Step Lab (Echo Kernel-Mode) .

For more information about remote debugging, see Remote Debugging Using WinDbg .

Related articles

Debugging Tools for Windows

Debug Universal Drivers - Step by Step Lab (Echo Kernel-Mode)

Write your first driver


How to write a Smart Rollup kernel

By Pierre-Louis Dubois

In this blog post, we will demonstrate how to create a Wasm kernel running on a Tezos Smart Optimistic Rollup. To do so, we are going to create a counter in Rust, compile it to WebAssembly (abbreviated Wasm), and simulate its execution.


To develop your own kernel, you can choose any language that compiles to Wasm. An SDK is being developed in Rust by the Tezos core dev teams, so we will use Rust as the programming language. To install Rust, please read this document.

On Unix systems, Rust can be installed as follows:
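The command itself was elided here; the standard rustup installer for Unix systems is:

```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```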

This blog post was tested with Rust 1.66.

Create your project

Let’s initialize the project with cargo .
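The command itself is missing here; presumably something like the following (the project name counter is an assumption based on the post's goal of building a counter):

```shell
mkdir counter && cd counter
cargo init --lib
```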

As you may have noticed, we use the --lib option because we don't want Rust's default main function. Instead, we will pass a function to a macro named kernel_entry.

The file Cargo.toml (aka the “manifest”) contains the project's configuration. Before starting your project, you will need to update the lib section to allow compilation to Wasm, and add the kernel library as a dependency. To do so, update your Cargo.toml file as described below:
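The exact manifest listing is missing here; the shape of the change is sketched below. The crate-type line is the essential part for Wasm compilation; the dependency names and versions are placeholders for the SDK crates the post refers to:

```toml
[lib]
crate-type = ["cdylib"]

[dependencies]
# Placeholders: use the kernel/host SDK crates and versions from the original post
kernel = "*"
host = "*"

[dev-dependencies]
mock_runtime = "*"
mock_host = "*"
```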

This article won't explain how to write tests with mock_runtime and mock_host, which let you mock different parts of your kernel, but we need to include these libraries to compile the kernel.

To compile your kernel to Wasm, you will need to add a new target, wasm32-unknown-unknown, to the Rust compiler.
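The rustup command for adding this target is:

```shell
rustup target add wasm32-unknown-unknown
```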

The project is now set up. You can build it with the following command:
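Given the output directory mentioned below, the build command is presumably:

```shell
cargo build --release --target wasm32-unknown-unknown
```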

The Wasm binary file will be located under the directory target/wasm32-unknown-unknown/release

Rust code lives in the src directory. The cargo init --lib command has created a src/lib.rs file for you.

Hello Kernel

As a first step, let's write a hello-world kernel. Its goal is simple: print “Hello Kernel” every time the kernel is called.
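The listing is missing here; a sketch of such a hello kernel, assuming SDK crate paths like those described in the surrounding text (the exact module paths and debug-write API vary between SDK versions):

```rust
use host::runtime::Runtime; // assumed path to the Runtime trait
use kernel::kernel_entry;   // assumed path to the entry macro

// Called once per Tezos block; prints a greeting via the host's debug log.
fn hello_kernel<Host: Runtime>(host: &mut Host) {
    host.write_debug("Hello Kernel\n");
}

kernel_entry!(hello_kernel);
```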

We are importing two crates of the SDK: the host crate and the kernel crate. The host crate provides host functions to the kernel as safe Rust. The kernel crate exposes a macro to run your kernel.

The main function of your kernel is the function given to the macro kernel_entry. The host argument allows your kernel to communicate with the runtime. It gives you the ability to:

This function is called one time per Tezos block and will process the whole inbox.

Let me explain the vocabulary used in kernel development:

Looping over the inbox

Suppose our user has sent a message to the rollup; we need to process it. To do so, we have to loop over the inbox.

As explained earlier, the host argument gives you a way to read the input from the inbox with the following expression:
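The expression was elided here; it is presumably of this shape (the exact signature varies between SDK versions):

```rust
// Read one message from the inbox, up to MAX_INPUT_MESSAGE_SIZE bytes
let input = host.read_input(MAX_INPUT_MESSAGE_SIZE);
```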

The size of a layer 1 message is 4096 bytes. The kernel library has defined a constant to represent this value: MAX_INPUT_MESSAGE_SIZE .

The function may fail, in which case the error should be handled; to keep things simple, we won't handle it here.

If it succeeds, the function returns an optional value: the inbox may be empty, in which case there are no more messages to read.

Let’s write a recursive function to print “Hello message” for each input.
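A sketch of such a function, using the same assumed SDK surface as described above (errors are deliberately not handled):

```rust
// Recursively drain the inbox, greeting each message.
fn process_inbox<Host: Runtime>(host: &mut Host) {
    match host.read_input(MAX_INPUT_MESSAGE_SIZE) {
        Ok(Some(_message)) => {
            host.write_debug("Hello message\n");
            process_inbox(host); // recurse until the inbox is empty
        }
        _ => (), // empty inbox (or read error): stop
    }
}
```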

Do not forget to call your function:

The messages read this way are simple binary representations of the content sent by the user. To process them, you will have to deserialize them from binary.

And that's not all: the inbox contains more than just your user's messages. It is always populated with three rollup-internal messages: Start of Level, Info per Level, and End of Level.

Thankfully it’s easy to differentiate the rollup messages from the user messages. The rollup messages start with the byte 0x00 and the user messages start with the byte 0x01.

Let’s ignore the messages from the rollup and get the appropriate bytes sent by our user:
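As a self-contained illustration of this tag convention (a hypothetical helper, not the SDK's API), filtering could look like this:

```rust
/// Return the payload of a user message (tag byte 0x01), or None for
/// rollup-internal messages (tag byte 0x00) and empty inputs.
fn user_payload(message: &[u8]) -> Option<&[u8]> {
    match message.split_first() {
        Some((&0x01, payload)) => Some(payload),
        _ => None, // rollup message or empty input: ignore it
    }
}

fn main() {
    // user message: keep the bytes after the tag
    assert_eq!(user_payload(&[0x01, 0xde, 0xad]), Some(&[0xde, 0xad][..]));
    // rollup-internal message: ignored
    assert_eq!(user_payload(&[0x00, 0x42]), None);
    println!("ok");
}
```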


Linux Kernel Development and Writing a Simple Kernel Module

This post is the first in a Linux kernel series. Writing code to run in the kernel is different from writing a user application. When developing in the kernel, you don't write code from scratch; you implement one or more interfaces and register your implementation with a kernel subsystem.

Kernel Interfaces

The kernel is written in C, so to create an interface we use a structure with function pointers.

The subsystem also provides one or more functions that accept that interface and register it as a new object in the kernel.

You implement some functions from the interface (usually not all of them), create an object of the structure type, initialize it with your functions, and pass it to the register_xxx function.

Simple Example – Real Time Clock

To add a new real time clock to the kernel, you need to implement the following interface (taken from rtc.h)
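The listing was elided here; the interface in question is struct rtc_class_ops, abridged below (field set varies between kernel versions):

```c
/* Abridged from include/linux/rtc.h */
struct rtc_class_ops {
    int (*read_time)(struct device *dev, struct rtc_time *tm);
    int (*set_time)(struct device *dev, struct rtc_time *tm);
    int (*read_alarm)(struct device *dev, struct rtc_wkalrm *alrm);
    int (*set_alarm)(struct device *dev, struct rtc_wkalrm *alrm);
    /* ... further optional callbacks ... */
};
```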

The minimal implementation provides read_time and set_time; use the examples in the source tree (drivers/rtc) as a reference.

To register a new Real time clock, call the function:

So, to implement a new RTC in Linux, create two functions for read_time and set_time, declare a structure object, and call rtc_device_register:
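A sketch of those steps (the function and device names my_rtc_read_time, my_rtc_set_time, and pdev are illustrative; the register call matches the historical rtc_device_register API this post describes):

```c
static const struct rtc_class_ops my_rtc_ops = {
    .read_time = my_rtc_read_time,
    .set_time  = my_rtc_set_time,
};

/* Register the new RTC with the subsystem */
rtc = rtc_device_register("my-rtc", &pdev->dev, &my_rtc_ops, THIS_MODULE);
```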

As you can see, the task is simple: write some functions, implement them the way everyone does (look at the source for many examples), create an interface object, and call the register function.

When you write a kernel module it can be:

It doesn't matter what you want to write; it's always the same process: implement one or more interfaces and register them using functions provided by the subsystem. Most of the time you will find documentation in the kernel's Documentation folder, and the key to success is to use the source code: it contains many working examples.

Writing a Simple Module

As mentioned above, after implementing an interface, we need to register it with the system. To do that, we need code that runs at initialization.

The simplest module must declare two functions, one for init and one for exit. The module can be loaded with the kernel at startup (and unloaded at shutdown) or explicitly using the insmod command (and rmmod for unload); this is called a Loadable Kernel Module.

The simplest module looks like this:
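The listing is missing here; a minimal module of the kind described, reconstructed to match the log levels mentioned later in the post (ALERT for init, WARNING for cleanup):

```c
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

/* Called when the module is loaded (insmod) */
static int __init simp_init(void)
{
    printk(KERN_ALERT "simp: module loaded\n");
    return 0;
}

/* Called when the module is unloaded (rmmod) */
static void __exit simp_exit(void)
{
    printk(KERN_WARNING "simp: module unloaded\n");
}

module_init(simp_init);
module_exit(simp_exit);

MODULE_LICENSE("GPL");
```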

The module declares two functions.

Both functions use printk to write a message to the kernel log.

To build the module we need the following Makefile :
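The Makefile was elided here; a minimal kbuild-style Makefile for this module (the object name simp.o corresponds to the simp.ko file mentioned below):

```make
obj-m := simp.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```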

On any Linux distribution, you will find the kernel Makefile and headers in /lib/modules/[version]/build.

Our Makefile invokes the kernel's build system to build the module. Run make in the directory and it will produce the simp.ko file.

To load the module into the kernel, use:
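The command was elided; for the simp.ko built above:

```shell
sudo insmod simp.ko
```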

To see the kernel log, use dmesg.

To unload the module, use the rmmod command:
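The command was elided; for the module loaded above:

```shell
sudo rmmod simp
```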

And again, you will see the output in the kernel log using the dmesg command.

printk function

printk writes messages to the kernel log.

The printk function is similar to printf(3) from the standard library, but it supports no floating-point formats.

Log messages are prefixed with a marker such as "<0>", where the number denotes severity, from 0 (most severe) to 7.

Macros are defined for the severity levels: KERN_EMERG, KERN_ALERT, KERN_CRIT, KERN_ERR, KERN_WARNING, KERN_NOTICE, KERN_INFO, KERN_DEBUG.

For example, a call such as printk(KERN_ALERT "hello\n") simply writes <1>hello to the kernel log, since KERN_ALERT expands to the "<1>" prefix.

If you run dmesg on Ubuntu, you will see that the output is colored differently for the init and exit functions.


That's because we logged the init message with KERN_ALERT and the cleanup with KERN_WARNING.

You can control which levels are displayed on the system console via the file /proc/sys/kernel/printk.



Writing an OS in Rust

Philipp Oppermann's blog

A Minimal Rust Kernel

In this post, we create a minimal 64-bit Rust kernel for the x86 architecture. We build upon the freestanding Rust binary from the previous post to create a bootable disk image that prints something to the screen.

This blog is openly developed on GitHub . If you have any problems or questions, please open an issue there. You can also leave comments at the bottom . The complete source code for this post can be found in the post-02 branch.

🔗 The Boot Process

When you turn on a computer, it begins executing firmware code that is stored in motherboard ROM . This code performs a power-on self-test , detects available RAM, and pre-initializes the CPU and hardware. Afterwards, it looks for a bootable disk and starts booting the operating system kernel.

On x86, there are two firmware standards: the “Basic Input/Output System” (BIOS) and the newer “Unified Extensible Firmware Interface” (UEFI). The BIOS standard is old and outdated, but simple and well-supported on any x86 machine since the 1980s. UEFI, in contrast, is more modern and has many more features, but is more complex to set up (at least in my opinion).

Currently, we only provide BIOS support, but support for UEFI is planned, too. If you’d like to help us with this, check out the Github issue .

🔗 BIOS Boot

Almost all x86 systems have support for BIOS booting, including newer UEFI-based machines that use an emulated BIOS. This is great, because you can use the same boot logic across all machines from the last century. But this wide compatibility is at the same time the biggest disadvantage of BIOS booting, because it means that the CPU is put into a 16-bit compatibility mode called real mode before booting so that archaic bootloaders from the 1980s would still work.

But let’s start from the beginning:

When you turn on a computer, it loads the BIOS from some special flash memory located on the motherboard. The BIOS runs self-test and initialization routines of the hardware, then it looks for bootable disks. If it finds one, control is transferred to its bootloader , which is a 512-byte portion of executable code stored at the disk’s beginning. Most bootloaders are larger than 512 bytes, so bootloaders are commonly split into a small first stage, which fits into 512 bytes, and a second stage, which is subsequently loaded by the first stage.

The bootloader has to determine the location of the kernel image on the disk and load it into memory. It also needs to switch the CPU from the 16-bit real mode first to the 32-bit protected mode , and then to the 64-bit long mode , where 64-bit registers and the complete main memory are available. Its third job is to query certain information (such as a memory map) from the BIOS and pass it to the OS kernel.

Writing a bootloader is a bit cumbersome as it requires assembly language and a lot of non insightful steps like “write this magic value to this processor register”. Therefore, we don’t cover bootloader creation in this post and instead provide a tool named bootimage that automatically prepends a bootloader to your kernel.

If you are interested in building your own bootloader: Stay tuned, a set of posts on this topic is already planned!

🔗 The Multiboot Standard

To avoid that every operating system implements its own bootloader, which is only compatible with a single OS, the Free Software Foundation created an open bootloader standard called Multiboot in 1995. The standard defines an interface between the bootloader and the operating system, so that any Multiboot-compliant bootloader can load any Multiboot-compliant operating system. The reference implementation is GNU GRUB , which is the most popular bootloader for Linux systems.

To make a kernel Multiboot compliant, one just needs to insert a so-called Multiboot header at the beginning of the kernel file. This makes it very easy to boot an OS from GRUB. However, GRUB and the Multiboot standard have some problems too:

Because of these drawbacks, we decided to not use GRUB or the Multiboot standard. However, we plan to add Multiboot support to our bootimage tool, so that it’s possible to load your kernel on a GRUB system too. If you’re interested in writing a Multiboot compliant kernel, check out the first edition of this blog series.

(We don’t provide UEFI support at the moment, but we would love to! If you’d like to help, please tell us in the Github issue .)

🔗 A Minimal Kernel

Now that we roughly know how a computer boots, it’s time to create our own minimal kernel. Our goal is to create a disk image that prints a “Hello World!” to the screen when booted. We do this by extending the previous post’s freestanding Rust binary .

As you may remember, we built the freestanding binary through cargo , but depending on the operating system, we needed different entry point names and compile flags. That’s because cargo builds for the host system by default, i.e., the system you’re running on. This isn’t something we want for our kernel, because a kernel that runs on top of, e.g., Windows, does not make much sense. Instead, we want to compile for a clearly defined target system .

🔗 Installing Rust Nightly

Rust has three release channels: stable , beta , and nightly . The Rust Book explains the difference between these channels really well, so take a minute and check it out . For building an operating system, we will need some experimental features that are only available on the nightly channel, so we need to install a nightly version of Rust.

To manage Rust installations, I highly recommend rustup . It allows you to install nightly, beta, and stable compilers side-by-side and makes it easy to update them. With rustup, you can use a nightly compiler for the current directory by running rustup override set nightly . Alternatively, you can add a file called rust-toolchain with the content nightly to the project’s root directory. You can check that you have a nightly version installed by running rustc --version : The version number should contain -nightly at the end.

The nightly compiler allows us to opt-in to various experimental features by using so-called feature flags at the top of our file. For example, we could enable the experimental asm! macro for inline assembly by adding #![feature(asm)] to the top of our main.rs . Note that such experimental features are completely unstable, which means that future Rust versions might change or remove them without prior warning. For this reason, we will only use them if absolutely necessary.

🔗 Target Specification

Cargo supports different target systems through the --target parameter. The target is described by a so-called target triple , which describes the CPU architecture, the vendor, the operating system, and the ABI . For example, the x86_64-unknown-linux-gnu target triple describes a system with an x86_64 CPU, no clear vendor, and a Linux operating system with the GNU ABI. Rust supports many different target triples , including arm-linux-androideabi for Android or wasm32-unknown-unknown for WebAssembly .

For our target system, however, we require some special configuration parameters (e.g. no underlying OS), so none of the existing target triples fits. Fortunately, Rust allows us to define our own target through a JSON file. For example, a JSON file that describes the x86_64-unknown-linux-gnu target looks like this:
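The example file is missing here; reconstructed from the published post, it looks like this:

```json
{
    "llvm-target": "x86_64-unknown-linux-gnu",
    "data-layout": "e-m:e-i64:64-f80:128-n8:16:32:64-S128",
    "arch": "x86_64",
    "target-endian": "little",
    "target-pointer-width": "64",
    "target-c-int-width": "32",
    "os": "linux",
    "executables": true,
    "linker-flavor": "gcc",
    "pre-link-args": ["-m64"],
    "morestack": false
}
```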

Most fields are required by LLVM to generate code for that platform. For example, the data-layout field defines the size of various integer, floating point, and pointer types. Then there are fields that Rust uses for conditional compilation, such as target-pointer-width . The third kind of field defines how the crate should be built. For example, the pre-link-args field specifies arguments passed to the linker .

We also target x86_64 systems with our kernel, so our target specification will look very similar to the one above. Let’s start by creating an x86_64-blog_os.json file (choose any name you like) with the common content:
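The common content, reconstructed from the published post:

```json
{
    "llvm-target": "x86_64-unknown-none",
    "data-layout": "e-m:e-i64:64-f80:128-n8:16:32:64-S128",
    "arch": "x86_64",
    "target-endian": "little",
    "target-pointer-width": "64",
    "target-c-int-width": "32",
    "os": "none",
    "executables": true
}
```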

Note that we changed the OS in the llvm-target and the os field to none , because we will run on bare metal.

We add the following build-related entries:

Instead of using the platform’s default linker (which might not support Linux targets), we use the cross-platform LLD linker that is shipped with Rust for linking our kernel.

This setting specifies that the target doesn’t support stack unwinding on panic, so instead the program should abort directly. This has the same effect as the panic = "abort" option in our Cargo.toml, so we can remove it from there. (Note that, in contrast to the Cargo.toml option, this target option also applies when we recompile the core library later in this post. So, even if you prefer to keep the Cargo.toml option, make sure to include this option.)

We’re writing a kernel, so we’ll need to handle interrupts at some point. To do that safely, we have to disable a certain stack pointer optimization called the “red zone” , because it would cause stack corruption otherwise. For more information, see our separate post about disabling the red zone .

The features field enables/disables target features. We disable the mmx and sse features by prefixing them with a minus and enable the soft-float feature by prefixing it with a plus. Note that there must be no spaces between different flags, otherwise LLVM fails to interpret the features string.

The mmx and sse features determine support for Single Instruction Multiple Data (SIMD) instructions, which can often speed up programs significantly. However, using the large SIMD registers in OS kernels leads to performance problems. The reason is that the kernel needs to restore all registers to their original state before continuing an interrupted program. This means that the kernel has to save the complete SIMD state to main memory on each system call or hardware interrupt. Since the SIMD state is very large (512–1600 bytes) and interrupts can occur very often, these additional save/restore operations considerably harm performance. To avoid this, we disable SIMD for our kernel (not for applications running on top!).

A problem with disabling SIMD is that floating point operations on x86_64 require SIMD registers by default. To solve this problem, we add the soft-float feature, which emulates all floating point operations through software functions based on normal integers.

For more information, see our post on disabling SIMD .
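Collected together, the build-related entries described in the preceding paragraphs are (reconstructed from the published post):

```json
"linker-flavor": "ld.lld",
"linker": "rust-lld",
"panic-strategy": "abort",
"disable-redzone": true,
"features": "-mmx,-sse,+soft-float"
```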

🔗 Putting it Together

Our target specification file now looks like this:
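The full file is missing here; the complete x86_64-blog_os.json, reconstructed from the common content and build-related entries described above:

```json
{
    "llvm-target": "x86_64-unknown-none",
    "data-layout": "e-m:e-i64:64-f80:128-n8:16:32:64-S128",
    "arch": "x86_64",
    "target-endian": "little",
    "target-pointer-width": "64",
    "target-c-int-width": "32",
    "os": "none",
    "executables": true,
    "linker-flavor": "ld.lld",
    "linker": "rust-lld",
    "panic-strategy": "abort",
    "disable-redzone": true,
    "features": "-mmx,-sse,+soft-float"
}
```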

🔗 Building our Kernel

Compiling for our new target will use Linux conventions (I’m not quite sure why; I assume it’s just LLVM’s default). This means that we need an entry point named _start as described in the previous post :
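The listing is missing here; the freestanding binary from the previous post has this shape in src/main.rs:

```rust
#![no_std]  // don't link the Rust standard library
#![no_main] // disable the normal Rust entry points

use core::panic::PanicInfo;

/// Called on panic; required because we opted out of std.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

#[no_mangle] // keep the symbol name `_start` so the linker finds it
pub extern "C" fn _start() -> ! {
    // this is the entry point the bootloader will jump to
    loop {}
}
```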

Note that the entry point needs to be called _start regardless of your host OS.

We can now build the kernel for our new target by passing the name of the JSON file as --target :
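Passing the JSON file name as the target:

```shell
cargo build --target x86_64-blog_os.json
```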

It fails! The error tells us that the Rust compiler no longer finds the core library . This library contains basic Rust types such as Result , Option , and iterators, and is implicitly linked to all no_std crates.

The problem is that the core library is distributed together with the Rust compiler as a precompiled library. So it is only valid for supported host triples (e.g., x86_64-unknown-linux-gnu ) but not for our custom target. If we want to compile code for other targets, we need to recompile core for these targets first.

🔗 The build-std Option

That's where the build-std feature of cargo comes in. It allows us to recompile core and other standard library crates on demand, instead of using the precompiled versions shipped with the Rust installation. This feature is very new and still unfinished, so it is marked as “unstable” and only available on nightly Rust compilers.

To use the feature, we need to create a cargo configuration file at .cargo/config.toml with the following content:
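The file content is missing here; as in the published post, it is:

```toml
[unstable]
build-std = ["core", "compiler_builtins"]
```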

This tells cargo that it should recompile the core and compiler_builtins libraries. The latter is required because it is a dependency of core . In order to recompile these libraries, cargo needs access to the rust source code, which we can install with rustup component add rust-src .

Note: The unstable.build-std configuration key requires at least the Rust nightly from 2020-07-15.

After setting the unstable.build-std configuration key and installing the rust-src component, we can rerun our build command:

We see that cargo build now recompiles the core , rustc-std-workspace-core (a dependency of compiler_builtins ), and compiler_builtins libraries for our custom target.

🔗 Memory-Related Intrinsics

The Rust compiler assumes that a certain set of built-in functions is available for all systems. Most of these functions are provided by the compiler_builtins crate that we just recompiled. However, there are some memory-related functions in that crate that are not enabled by default because they are normally provided by the C library on the system. These functions include memset , which sets all bytes in a memory block to a given value, memcpy , which copies one memory block to another, and memcmp , which compares two memory blocks. While we didn’t need any of these functions to compile our kernel right now, they will be required as soon as we add some more code to it (e.g. when copying structs around).

Since we can’t link to the C library of the operating system, we need an alternative way to provide these functions to the compiler. One possible approach for this could be to implement our own memset etc. functions and apply the #[no_mangle] attribute to them (to avoid the automatic renaming during compilation). However, this is dangerous since the slightest mistake in the implementation of these functions could lead to undefined behavior. For example, implementing memcpy with a for loop may result in an infinite recursion because for loops implicitly call the IntoIterator::into_iter trait method, which may call memcpy again. So it’s a good idea to reuse existing, well-tested implementations instead.

Fortunately, the compiler_builtins crate already contains implementations for all the needed functions; they are just disabled by default so that they don't collide with the implementations from the C library. We can enable them by setting cargo's build-std-features flag to ["compiler-builtins-mem"] . Like the build-std flag, this flag can be passed either on the command line as a -Z flag or configured in the unstable table in the .cargo/config.toml file. Since we always want to build with this flag, the config file option makes more sense for us:
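A sketch of the resulting .cargo/config.toml , combining the build-std key described above with the new feature flag:

```toml
# in .cargo/config.toml

[unstable]
build-std-features = ["compiler-builtins-mem"]
build-std = ["core", "compiler_builtins"]
```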

(Support for the compiler-builtins-mem feature was only added very recently , so you need at least Rust nightly 2020-09-30 for it.)

Behind the scenes, this flag enables the mem feature of the compiler_builtins crate. The effect of this is that the #[no_mangle] attribute is applied to the memcpy etc. implementations of the crate, which makes them available to the linker.

With this change, our kernel has valid implementations for all compiler-required functions, so it will continue to compile even if our code gets more complex.

🔗 Set a Default Target

To avoid passing the --target parameter on every invocation of cargo build , we can override the default target. To do this, we add the following to our cargo configuration file at .cargo/config.toml :
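A sketch of the config entry, assuming the target file is named x86_64-blog_os.json :

```toml
# in .cargo/config.toml

[build]
target = "x86_64-blog_os.json"
```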

This tells cargo to use our x86_64-blog_os.json target when no explicit --target argument is passed. This means that we can now build our kernel with a simple cargo build . For more information on cargo configuration options, check out the official documentation .

We are now able to build our kernel for a bare metal target with a simple cargo build . However, our _start entry point, which will be called by the boot loader, is still empty. It’s time that we output something to screen from it.

🔗 Printing to Screen

The easiest way to print text to the screen at this stage is the VGA text buffer . It is a special memory area mapped to the VGA hardware that contains the contents displayed on screen. It normally consists of 25 lines that each contain 80 character cells. Each character cell displays an ASCII character with some foreground and background colors. The screen output looks like this:

screen output for common ASCII characters

We will discuss the exact layout of the VGA buffer in the next post, where we write a first small driver for it. For printing “Hello World!”, we just need to know that the buffer is located at address 0xb8000 and that each character cell consists of an ASCII byte and a color byte.

The implementation looks like this:
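A sketch consistent with the description that follows; the HELLO byte string and the trailing endless loop are assumed names and details:

```rust
static HELLO: &[u8] = b"Hello World!";

#[no_mangle]
pub extern "C" fn _start() -> ! {
    // cast the VGA buffer address to a raw pointer
    let vga_buffer = 0xb8000 as *mut u8;

    for (i, &byte) in HELLO.iter().enumerate() {
        unsafe {
            // each character cell is two bytes: an ASCII byte and a color byte
            *vga_buffer.offset(i as isize * 2) = byte;
            *vga_buffer.offset(i as isize * 2 + 1) = 0xb; // light cyan
        }
    }

    loop {}
}
```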

First, we cast the integer 0xb8000 into a raw pointer . Then we iterate over the bytes of the static HELLO byte string . We use the enumerate method to additionally get a running variable i . In the body of the for loop, we use the offset method to write the string byte and the corresponding color byte ( 0xb is a light cyan).

Note that there’s an unsafe block around all memory writes. The reason is that the Rust compiler can’t prove that the raw pointers we create are valid. They could point anywhere and lead to data corruption. By putting them into an unsafe block, we’re basically telling the compiler that we are absolutely sure that the operations are valid. Note that an unsafe block does not turn off Rust’s safety checks. It only allows you to do five additional things .

I want to emphasize that this is not the way we want to do things in Rust! It’s very easy to mess up when working with raw pointers inside unsafe blocks. For example, we could easily write beyond the buffer’s end if we’re not careful.

So we want to minimize the use of unsafe as much as possible. Rust gives us the ability to do this by creating safe abstractions. For example, we could create a VGA buffer type that encapsulates all unsafety and ensures that it is impossible to do anything wrong from the outside. This way, we would only need minimal amounts of unsafe code and can be sure that we don’t violate memory safety . We will create such a safe VGA buffer abstraction in the next post.

🔗 Running our Kernel

Now that we have an executable that does something perceptible, it is time to run it. First, we need to turn our compiled kernel into a bootable disk image by linking it with a bootloader. Then we can run the disk image in the QEMU virtual machine or boot it on real hardware using a USB stick.

🔗 Creating a Bootimage

To turn our compiled kernel into a bootable disk image, we need to link it with a bootloader. As we learned in the section about booting , the bootloader is responsible for initializing the CPU and loading our kernel.

Instead of writing our own bootloader, which is a project on its own, we use the bootloader crate. This crate implements a basic BIOS bootloader without any C dependencies, just Rust and inline assembly. To use it for booting our kernel, we need to add a dependency on it:
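A sketch of the Cargo.toml addition; the exact version number is an assumption based on the nightly dates mentioned above:

```toml
# in Cargo.toml

[dependencies]
bootloader = "0.9"
```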

Adding the bootloader as a dependency is not enough to actually create a bootable disk image. The problem is that we need to link our kernel with the bootloader after compilation, but cargo has no support for post-build scripts .

To solve this problem, we created a tool named bootimage that first compiles the kernel and bootloader, and then links them together to create a bootable disk image. To install the tool, execute the following command in your terminal:
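Since bootimage is distributed as a cargo subcommand, installing it is a single cargo install invocation:

```shell
cargo install bootimage
```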

For running bootimage and building the bootloader, you need to have the llvm-tools-preview rustup component installed. You can do so by executing rustup component add llvm-tools-preview .

After installing bootimage and adding the llvm-tools-preview component, we can create a bootable disk image by executing:
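The tool is invoked as a cargo subcommand:

```shell
cargo bootimage
```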

We see that the tool recompiles our kernel using cargo build , so it will automatically pick up any changes you make. Afterwards, it compiles the bootloader, which might take a while. Like all crate dependencies, it is only built once and then cached, so subsequent builds will be much faster. Finally, bootimage combines the bootloader and your kernel into a bootable disk image.

After executing the command, you should see a bootable disk image named bootimage-blog_os.bin in your target/x86_64-blog_os/debug directory. You can boot it in a virtual machine or copy it to a USB drive to boot it on real hardware. (Note that this is not a CD image, which has a different format, so burning it to a CD doesn’t work).

🔗 How does it work?

The bootimage tool performs the following steps behind the scenes:

- It compiles our kernel to an ELF file.
- It compiles the bootloader dependency as a standalone executable.
- It links the bytes of the kernel ELF file to the bootloader.

When booted, the bootloader reads and parses the appended ELF file. It then maps the program segments to virtual addresses in the page tables, zeroes the .bss section, and sets up a stack. Finally, it reads the entry point address (our _start function) and jumps to it.

🔗 Booting it in QEMU

We can now boot the disk image in a virtual machine. To boot it in QEMU , execute the following command:
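Assuming QEMU for x86_64 is installed and using the image path from above, the invocation looks like this:

```shell
qemu-system-x86_64 -drive format=raw,file=target/x86_64-blog_os/debug/bootimage-blog_os.bin
```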

This opens a separate window which should look similar to this:

QEMU showing “Hello World!”

We see that our “Hello World!” is visible on the screen.

🔗 Real Machine

It is also possible to write it to a USB stick and boot it on a real machine, but be careful to choose the correct device name, because everything on that device is overwritten :
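A sketch using dd ; double-check the device name before running it, since the target device is completely overwritten:

```shell
dd if=target/x86_64-blog_os/debug/bootimage-blog_os.bin of=/dev/sdX && sync
```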

Where sdX is the device name of your USB stick.

After writing the image to the USB stick, you can run it on real hardware by booting from it. You probably need to use a special boot menu or change the boot order in your BIOS configuration to boot from the USB stick. Note that it currently doesn’t work for UEFI machines, since the bootloader crate has no UEFI support yet.

🔗 Using cargo run

To make it easier to run our kernel in QEMU, we can set the runner configuration key for cargo:
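A sketch of the config entry, matching the table described in the next paragraph:

```toml
# in .cargo/config.toml

[target.'cfg(target_os = "none")']
runner = "bootimage runner"
```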

The target.'cfg(target_os = "none")' table applies to all targets whose target configuration file’s "os" field is set to "none" . This includes our x86_64-blog_os.json target. The runner key specifies the command that should be invoked for cargo run . The command is run after a successful build with the executable path passed as the first argument. See the cargo documentation for more details.

The bootimage runner command is specifically designed to be usable as a runner executable. It links the given executable with the project’s bootloader dependency and then launches QEMU. See the Readme of bootimage for more details and possible configuration options.

Now we can use cargo run to compile our kernel and boot it in QEMU.

🔗 What’s next?

In the next post, we will explore the VGA text buffer in more detail and write a safe interface for it. We will also add support for the println macro.

Creating and maintaining this blog and the associated libraries is a lot of work, but I really enjoy doing it. By supporting me, you allow me to invest more time in new content, new features, and continuous maintenance.

The best way to support me is to sponsor me on GitHub , since they don't charge any fees. If you prefer other platforms, I also have Patreon and Donorbox accounts. The latter is the most flexible as it supports multiple currencies and one-time contributions.

Do you have a problem, want to share feedback, or discuss further ideas? Feel free to leave a comment here! Please stick to English and follow Rust's code of conduct . This comment thread directly maps to a discussion on GitHub , so you can also comment there if you prefer.


