Diffstat (limited to 'content/gemtext')
34 files changed, 6 insertions, 5798 deletions
diff --git a/content/gemtext b/content/gemtext new file mode 160000 +Subproject faa2cd392016af9b76a4f40eeb12003785ced15 diff --git a/content/gemtext/contact-information.gmi b/content/gemtext/contact-information.gmi deleted file mode 100644 index c33615e4..00000000 --- a/content/gemtext/contact-information.gmi +++ /dev/null @@ -1,38 +0,0 @@ -# Contact information - -## E-Mail - -* Secure E-Mail: paul.buetow at protonmail dot com -* E-Mail: paul at buetow dot org (forwards to the ProtonMail address) - -Why did I just mention two E-Mail addresses here? The buetow.org address will always stay. It is my lifetime E-Mail address, as I own the domain name. The address will stay even if I decide to change my E-Mail provider. - -Use the ProtonMail address if you care about security for now. The address stays valid as long as I am a ProtonMail user. Especially if you are a ProtonMail user too, we can have real end-to-end E-Mail encryption for our conversation. - -## Quick Links - -### Social Media - -I regularly share articles which I find interesting on all of my social media channels. To get you navigated quickly, here are the links: - -=> https://www.linkedin.com/in/paul-buetow-b4857270/ My LinkedIn profile -=> https://twitter.com/snonux My Twitter profile -=> https://t.me/snonux My Telegram channel - -### My Open Source code repositories - -=> https://github.com/snonux My personal GitHub page -=> https://github.com/mimecast/dtail DTail at Mimecast -=> https://github.com/mimecast/ioriot I/O Riot at Mimecast - -### My old personal website - -It's still there for fun + profit.
- -=> http://paul.buetow.org - -It's powered by Xerl, my own CMS: - -=> http://xerl.buetow.org - -=> ./ Go back to the main site diff --git a/content/gemtext/favicon.ico b/content/gemtext/favicon.ico Binary files differdeleted file mode 100644 index 999c023e..00000000 --- a/content/gemtext/favicon.ico +++ /dev/null diff --git a/content/gemtext/gemfeed/2008-06-26-perl-poetry.gmi b/content/gemtext/gemfeed/2008-06-26-perl-poetry.gmi deleted file mode 100644 index 0414a23f..00000000 --- a/content/gemtext/gemfeed/2008-06-26-perl-poetry.gmi +++ /dev/null @@ -1,166 +0,0 @@ -# Perl Poetry - -``` - '\|/' * --- * ----- - /|\ ____ - ' | ' {_ o^> * - : -_ /) - : ( ( .-''`'. - . \ \ / \ - . \ \ / \ - \ `-' `'. - \ . ' / `. - \ ( \ ) ( .') - ,, t '. | / | ( - '|``_/^\___ '| |`'-..-'| ( () -_~~|~/_|_|__/|~~~~~~~ | / ~~~~~ | | ~~~~~~~~ - -_ |L[|]L|/ | |\ MJP ) ) - ( |( / /| - ~~ ~ ~ ~~~~ | /\\ / /| | - || \\ _/ / | | - ~ ~ ~~~ _|| (_/ (___)_| |Nov291999 - (__) (____) -``` - -> Written by Paul Buetow 2008-06-26, last updated 2021-05-04 - -Here are some Perl Poems I wrote. They don't do anything useful when you run them, but they don't produce a compiler error either. They only exist for fun and demonstrate what you can do with Perl syntax. - -Wikipedia: "Perl poetry is the practice of writing poems that can be compiled as legal Perl code, for example the piece known as Black Perl. Perl poetry is made possible by the large number of English words that are used in the Perl language. New poems are regularly submitted to the community at PerlMonks." - -=> https://en.wikipedia.org/wiki/Perl - -## math.pl - -``` -#!/usr/bin/perl - -# (C) 2006 by Paul C. 
Buetow (http://paul.buetow.org) - -goto library for study $math; -BEGIN { s/earching/ books/ -and read $them, $at, $the } library: - -our $topics, cos and tan, -require strict; import { of, tied $patience }; - -do { int'egrate'; sub trade; }; -do { exp'onentize' and abs'olutize' }; -study and study and study and study; - -foreach $topic ({of, math}) { -you, m/ay /go, to, limits } - -do { not qw/erk / unless $success -and m/ove /o;$n and study }; - -do { int'egrate'; sub trade; }; -do { exp'onentize' and abs'olutize' }; -study and study and study and study; - -grep /all/, exp'onents' and cos'inuses'; -/seek results/ for @all, log'4rithms'; - -'you' =~ m/ay /go, not home -unless each %book ne#ars -$completion; - -do { int'egrate'; sub trade; }; -do { exp'onentize' and abs'olutize' }; - -#at -home: //ig,'nore', time and sleep $very =~ s/tr/on/g; -__END__ - -``` - -## christmas.pl - -``` -#!/usr/bin/perl - -# (C) 2006 by Paul C. Buetow (http://paul.buetow.org) - -Christmas:{time;#!!! - -Children: do tell $wishes; - -Santa: for $each (@children) { -BEGIN { read $each, $their, wishes and study them; use Memoize#ing - -} use constant gift, 'wrapping'; -package Gifts; pack $each, gift and bless $each and goto deliver -or do import if not local $available,!!! 
HO, HO, HO; - -redo Santa, pipe $gifts, to_childs; -redo Santa and do return if last one, is, delivered; - -deliver: gift and require diagnostics if our $gifts ,not break; -do{ use NEXT; time; tied $gifts} if broken and dump the, broken, ones; -The_children: sleep and wait for (each %gift) and try { to => untie $gifts }; - -redo Santa, pipe $gifts, to_childs; -redo Santa and do return if last one, is, delivered; - -The_christmas_tree: formline s/ /childrens/, $gifts; -alarm and warn if not exists $Christmas{ tree}, @t, $ENV{HOME}; -write <<EMail - to the parents to buy a new christmas tree!!!!111 - and send the -EMail -;wait and redo deliver until defined local $tree; - -redo Santa, pipe $gifts, to_childs; -redo Santa and do return if last one, is, delivered ;} - -END {} our $mission and do sleep until next Christmas ;} - -__END__ - -This is perl, v5.8.8 built for i386-freebsd-64int -``` - -## shopping.pl - -``` -#!/usr/bin/perl - -# (C) 2007 by Paul C. Buetow (http://paul.buetow.org) - -BEGIN{} goto mall for $shopping; - -m/y/; mall: seek$s, cool products(), { to => $sell }; -for $their (@business) { to:; earn:; a:; lot:; of:; money: } - -do not goto home and exit mall if exists $new{product}; -foreach $of (q(uality rich products)){} package products; - -our $news; do tell cool products() and do{ sub#tract -cool{ $products and shift @the, @bad, @ones; - -do bless [q(uality)], $products -and return not undef $stuff if not (local $available) }}; - -do { study and study and study for cool products() } -and do { seek $all, cool products(), { to => $buy } }; - -do { write $them, $down } and do { order: foreach (@case) { package s } }; -goto home if not exists $more{money} or die q(uerying) ;for( @money){}; - -at:;home: do { END{} and:; rest:; a:; bit: exit $shopping } -and sleep until unpack$ing, cool products(); - -__END__ -This is perl, v5.8.8 built for i386-freebsd-64int -``` - -## More... - -Did you like what you saw? 
Have a look at GitHub to see my other poems too: - -=> https://github.com/snonux/perl-poetry - -E-Mail me your thoughts at comments@mx.buetow.org! - -=> ../ Go back to the main site diff --git a/content/gemtext/gemfeed/2010-04-09-standard-ml-and-haskell.gmi b/content/gemtext/gemfeed/2010-04-09-standard-ml-and-haskell.gmi deleted file mode 100644 index a4a1dc57..00000000 --- a/content/gemtext/gemfeed/2010-04-09-standard-ml-and-haskell.gmi +++ /dev/null @@ -1,174 +0,0 @@ -# Standard ML and Haskell - -> Written by Paul Buetow 2010-04-09 - -I am currently looking into the functional programming language Standard ML (aka SML). The purpose is to refresh my functional programming skills and to learn something new too. Since I already know a little Haskell, I could not help myself and implemented the same exercises in Haskell too. - -As you will see, SML and Haskell are very similar (at least when it comes to the basics). However, the syntax of Haskell is a bit more "advanced". Haskell utilizes fewer keywords (e.g. no val, end, fun, fn ...). Haskell also allows you to write down the function types explicitly. What I have been missing in SML so far are the so-called pattern guards. Although this is a very superficial comparison, so far I like Haskell more than SML. Nevertheless, I thought it would be fun to demonstrate a few simple functions of both languages to show off the similarities. - -Haskell is a "purely functional" programming language, whereas SML also makes explicit use of imperative concepts.
I am by far not a specialist in either of these languages, but here are a few functions implemented in both SML and Haskell: - -## Defining a multi data type - -Standard ML: - -``` -datatype 'a multi - = EMPTY - | ELEM of 'a - | UNION of 'a multi * 'a multi -``` - -Haskell: - -``` -data (Eq a) => Multi a - = Empty - | Elem a - | Union (Multi a) (Multi a) - deriving Show -``` - -## Processing a multi - -Standard ML: - -``` -fun number (EMPTY) _ = 0 - | number (ELEM x) w = if x = w then 1 else 0 - | number (UNION (x,y)) w = (number x w) + (number y w) -fun test_number w = number (UNION (EMPTY, \ - UNION (ELEM 4, UNION (ELEM 6, \ - UNION (UNION (ELEM 4, ELEM 4), EMPTY))))) w -``` - -Haskell: - -``` -number Empty _ = 0 -number (Elem x) w = if x == w then 1 else 0 -number (Union x y) w = (number x w) + (number y w) -test_number w = number (Union Empty \ - (Union (Elem 4) (Union (Elem 6) \ - (Union (Union (Elem 4) (Elem 4)) Empty)))) w -``` - -## Simplify function - -Standard ML: - -``` -fun simplify (UNION (x,y)) = - let fun is_empty (EMPTY) = true | is_empty _ = false - val x' = simplify x - val y' = simplify y - in if (is_empty x') andalso (is_empty y') - then EMPTY - else if (is_empty x') - then y' - else if (is_empty y') - then x' - else UNION (x', y') - end - | simplify x = x -``` - -Haskell: - -``` -simplify (Union x y) - | (isEmpty x') && (isEmpty y') = Empty - | isEmpty x' = y' - | isEmpty y' = x' - | otherwise = Union x' y' - where - isEmpty Empty = True - isEmpty _ = False - x' = simplify x - y' = simplify y -simplify x = x -``` - -## Delete all - -Standard ML: - -``` -fun delete_all m w = - let fun delete_all' (ELEM x) = if x = w then EMPTY else ELEM x - | delete_all' (UNION (x,y)) = UNION (delete_all' x, delete_all' y) - | delete_all' x = x - in simplify (delete_all' m) - end -``` - -Haskell: - -``` -delete_all m w = simplify (delete_all' m) - where - delete_all' (Elem x) = if x == w then Empty else Elem x - delete_all' (Union x y) = Union (delete_all' x) (delete_all' y) - delete_all' x = x -``` - -## Delete
one - -Standard ML: - -``` -fun delete_one m w = - let fun delete_one' (UNION (x,y)) = - let val (x', deleted) = delete_one' x - in if deleted - then (UNION (x', y), deleted) - else let val (y', deleted) = delete_one' y - in (UNION (x, y'), deleted) - end - end - | delete_one' (ELEM x) = - if x = w then (EMPTY, true) else (ELEM x, false) - | delete_one' x = (x, false) - val (m', _) = delete_one' m - in simplify m' - end -``` - -Haskell: - -``` -delete_one m w = do - let (m', _) = delete_one' m - simplify m' - where - delete_one' (Union x y) = - let (x', deleted) = delete_one' x - in if deleted - then (Union x' y, deleted) - else let (y', deleted) = delete_one' y - in (Union x y', deleted) - delete_one' (Elem x) = - if x == w then (Empty, True) else (Elem x, False) - delete_one' x = (x, False) -``` - -## Higher order functions - -The first line is always the SML code, the second line always the Haskell variant: - -``` -fun make_map_fn f1 = fn (x,y) => f1 x :: y -make_map_fn f1 = \x y -> f1 x : y - -fun make_filter_fn f1 = fn (x,y) => if f1 x then x :: y else y -make_filter_fn f1 = \x y -> if f1 x then x : y else y - -fun my_map f l = foldr (make_map_fn f) [] l -my_map f l = foldr (make_map_fn f) [] l - -fun my_filter f l = foldr (make_filter_fn f) [] l -my_filter f l = foldr (make_filter_fn f) [] l -``` - -E-Mail me your thoughts at comments@mx.buetow.org!
- -=> ../ Go back to the main site diff --git a/content/gemtext/gemfeed/2010-05-09-the-fype-programming-language.gmi b/content/gemtext/gemfeed/2010-05-09-the-fype-programming-language.gmi deleted file mode 100644 index dc28ef6c..00000000 --- a/content/gemtext/gemfeed/2010-05-09-the-fype-programming-language.gmi +++ /dev/null @@ -1,510 +0,0 @@ -# The Fype Programming Language - -``` - ____ _ __ - / / _|_ _ _ __ ___ _ _ ___ __ _| |__ / _|_ _ - / / |_| | | | '_ \ / _ \ | | | |/ _ \/ _` | '_ \ | |_| | | | - _ / /| _| |_| | |_) | __/ | |_| | __/ (_| | | | |_| _| |_| | -(_)_/ |_| \__, | .__/ \___| \__, |\___|\__,_|_| |_(_)_| \__, | - |___/|_| |___/ |___/ -``` - -> Written by Paul Buetow 2010-05-09, last updated 2021-05-05 - -Fype is an interpreted programming language created by me for learning and fun. The interpreter is written in C. It has been tested on FreeBSD and NetBSD and may also work on other Unix like operating systems such as Linux based ones. To be honest, besides learning and fun there is really no other use case of why Fype actually exists as many other programming languages are much faster and more powerful. - -The Fype syntax is very simple and is using a maximum look ahead of 1 and a very easy top down parsing mechanism. Fype is parsing and interpreting its code simultaneously. This means, that syntax errors are only detected during program runtime. - -Fype is a recursive acronym and means "Fype is For Your Program Execution" or "Fype is Free Yak Programmed for ELF". You could also say "It's not a hype - it's Fype!". - -## Object oriented C style - -The Fype interpreter is written in an object oriented style of C. Each "main component" has its own .h and .c file. There is a struct type for each (most components at least) component which can be initialized using a "COMPONENT_new" function and destroyed using a "COMPONENT_delete" function. Method calls follow the same schema, e.g. "COMPONENT_METHODNAME". 
There is no such thing as class inheritance or polymorphism involved. - -To give you an idea of how it works, here is an example snippet from the main Fype "class header": - -``` -typedef struct { - Tupel *p_tupel_argv; // Contains command line options - List *p_list_token; // Initial list of tokens - Hash *p_hash_syms; // Symbol table - char *c_basename; -} Fype; -``` - -And here is a snippet from the main Fype "class implementation": - -``` -Fype* -fype_new() { - Fype *p_fype = malloc(sizeof(Fype)); - - p_fype->p_hash_syms = hash_new(512); - p_fype->p_list_token = list_new(); - p_fype->p_tupel_argv = tupel_new(); - p_fype->c_basename = NULL; - - garbage_init(); - - return (p_fype); -} - -void -fype_delete(Fype *p_fype) { - argv_tupel_delete(p_fype->p_tupel_argv); - - hash_iterate(p_fype->p_hash_syms, symbol_cleanup_hash_syms_cb); - hash_delete(p_fype->p_hash_syms); - - list_iterate(p_fype->p_list_token, token_ref_down_cb); - list_delete(p_fype->p_list_token); - - if (p_fype->c_basename) - free(p_fype->c_basename); - - garbage_destroy(); -} - -int -fype_run(int i_argc, char **pc_argv) { - Fype *p_fype = fype_new(); - - // argv: Maintains command line options - argv_run(p_fype, i_argc, pc_argv); - - // scanner: Creates a list of tokens - scanner_run(p_fype); - - // interpret: Interpret the list of tokens - interpret_run(p_fype); - - fype_delete(p_fype); - - return (0); -} -``` - -## Data types - -Fype uses auto type conversion. However, if you want to know what's going on, you may take a look at the following basic data types: -* integer - Specifies a number -* double - Specifies a double precision number -* string - Specifies a string -* number - May be an integer or a double number -* any - May be any type above -* void - No type -* identifier - A variable name, a procedure name or a function name - -There is no boolean type, but we can use the integer values 0 for false and 1 for true. There is support for explicit type casting too.
- -## Syntax - -### Comments - -Text from a # character until the end of the current line is considered a comment. Multi-line comments may start with #* and end with *# anywhere. The exception is when those signs appear inside of strings. - -### Variables - -Variables can be defined with the "my" keyword (inspired by Perl :-). If you don't assign a value during declaration, then it's using the default integer value 0. Variables may be changed during program runtime. Variables may be deleted using the "undef" keyword! Example: - -``` -my foo = 1 + 2; -say foo; - -my bar = 12, baz = foo; -say 1 + bar; -say bar; - -my baz; -say baz; # Will print out 0 -``` - -You may use the "defined" keyword to check if an identifier has been defined or not: - -``` -ifnot defined foo { - say "No foo yet defined"; -} - -my foo = 1; - -if defined foo { - put "foo is defined and has the value "; - say foo; -} -``` - -### Synonyms - -Each variable can have as many synonyms as you wish. A synonym is another name to access the content of a specific variable. Here is an example of how to use it: - -``` -my foo = "foo"; -my bar = \foo; -foo = "bar"; - -# The synonym variable should now also be set to "bar" -assert "bar" == bar; -``` - -Synonyms can be used for all kinds of identifiers. They are not limited to normal variables but can also be used for function and procedure names etc. (more about functions and procedures later). - -``` -# Create a new procedure baz -proc baz { say "I am baz"; } - -# Make a synonym of baz, and undefine baz -my bay = \baz; - -undef baz; - -# bay still has a reference of the original procedure baz -bay; # this prints out "I am baz" -``` - -The "syms" keyword gives you the total number of synonyms pointing to a specific value: - -``` -my foo = 1; -say syms foo; # Prints 1 - -my baz = \foo; -say syms foo; # Prints 2 -say syms baz; # Prints 2 - -undef baz; -say syms foo; # Prints 1 -``` - -## Statements and expressions - -A Fype program is a list of statements.
Each keyword, expression or function call is part of a statement. Each statement is ended with a semicolon. Example: - -``` -my bar = 3, foo = 1 + 2; -say foo; -exit foo - bar; -``` - -### Parentheses - -All parentheses for function arguments are optional. They help to make the code more readable. They also help to force precedence of expressions. - -### Basic expressions - -Any "any" value holding a string will be automatically converted to an integer value. - -``` -(any) <any> + <any> -(any) <any> - <any> -(any) <any> * <any> -(any) <any> / <any> -(integer) <any> == <any> -(integer) <any> != <any> -(integer) <any> <= <any> -(integer) <any> >= <any> -(integer) <any> < <any> -(integer) <any> > <any> -(integer) not <any> -``` - -### Bitwise expressions - -``` -(integer) <any> :< <any> -(integer) <any> :> <any> -(integer) <any> and <any> -(integer) <any> or <any> -(integer) <any> xor <any> -``` - -### Numeric expressions - -``` -(number) neg <number> -``` - -... returns the negative value of "number". - -``` -(integer) no <integer> -``` - -... returns 1 if the argument is 0, otherwise it will return 0! If no argument is given, then 0 is returned! - -``` -(integer) yes <integer> -``` - -... always returns 1. The parameter is optional. Example: - -``` -# Prints out 1, because foo is not defined -if yes { say no defined foo; } -``` - -## Control statements - -Control statements available in Fype: - -``` -if <expression> { <statements> } -``` - -... runs the statements if the expression evaluates to a true value. - -``` -ifnot <expression> { <statements> } -``` - -... runs the statements if the expression evaluates to a false value. - -``` -while <expression> { <statements> } -``` - -... runs the statements as long as the expression evaluates to a true value. - -``` -until <expression> { <statements> } -``` - -... runs the statements as long as the expression evaluates to a false value. - -## Scopes - -A new scope starts with a { and ends with a }.
An exception is a procedure, which does not use its own scope (see later in this manual). Control statements and functions support scopes. The "scope" function prints out all available symbols at the current scope. Here is a small example: - -``` -my foo = 1; - -{ - # Prints out 1 - put defined foo; - { - my bar = 2; - - # Prints out 1 - put defined bar; - - # Prints out all available symbols at this - # point to stdout. Those are: bar and foo - scope; - } - - # Prints out 0 - put defined bar; - - my baz = 3; -} - -# Prints out 0 -say defined bar; -``` - -Another example including an actual output: - -``` -./fype -e ’my global; func foo { my var4; func bar { my var2, var3; func baz { my var1; scope; } baz; } bar; } foo;’ -Scopes: -Scope stack size: 3 -Global symbols: -SYM_VARIABLE: global (id=00034, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1) -SYM_FUNCTION: foo -Local symbols: -SYM_VARIABLE: var1 (id=00038, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1) -1 level(s) up: -SYM_VARIABLE: var2 (id=00036, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1) -SYM_VARIABLE: var3 (id=00037, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1) -SYM_FUNCTION: baz -2 level(s) up: -SYM_VARIABLE: var4 (id=00035, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1) -SYM_FUNCTION: bar -``` - -## Definedness - -``` -(integer) defined <identifier> -``` - -... returns 1 if "identifier" has been defined. Returns 0 otherwise. - -``` -(integer) undef <identifier> -``` - -... tries to undefine/delete the "identifier". Returns 1 if it succeeded, otherwise 0 is returned. - -## System - -These are some system and interpreter specific built-in functions supported: - -``` -(void) end -``` - -... exits the program with the exit status of 0. - -``` -(void) exit <integer> -``` - -... exits the program with the specified exit status. - -``` -(integer) fork -``` - -... forks a subprocess. 
It returns 0 for the child process and the pid of the child process otherwise! Example: - -``` -my pid = fork; - -if pid { - put "I am the parent process; child has the pid "; - say pid; - -} ifnot pid { - say "I am the child process"; -} -``` - -To execute the garbage collector do: - -``` -(integer) gc -``` - -It returns the number of items freed! You may wonder why most of the time it will return a value of 0! Fype tries to free unneeded memory ASAP. This may change in future versions in order to gain faster execution speed! - -### I/O - -``` -(any) put <any> -``` - -... prints out the argument. - -``` -(any) say <any> -``` - -... is the same as put, but also includes an ending newline. - -``` -(void) ln -``` - -... just prints a newline. - -## Procedures and functions - -### Procedures - -A procedure can be defined with the "proc" keyword and deleted with the "undef" keyword. A procedure does not return any value and does not support parameter passing. It uses already defined variables (e.g. global variables). A procedure does not have its own namespace. It uses the calling namespace. It is possible to define new variables inside of a procedure in the current namespace. - -``` -proc foo { - say 1 + a * 3 + b; - my c = 6; -} - -my a = 2, b = 4; - -foo; # Run the procedure. Print out "11\n" -say c; # Print out "6\n"; -``` - -### Nested procedures - -It's possible to define procedures inside of procedures. Since procedures don't have their own scope, nested procedures will be available to the current scope as soon as the main procedure has run the first time. You may use the "defined" keyword in order to check if a procedure has been defined or not. - -``` -proc foo { - say "I am foo"; - - undef bar; - proc bar { - say "I am bar"; - } -} - -# Here bar would produce an error because -# the proc is not yet defined! -# bar; - -foo; # Here the procedure foo will define the procedure bar! -bar; # Now the procedure bar is defined!
-foo; # Here the procedure foo will redefine bar again! -``` - -### Functions - -A function can be defined with the "func" keyword and deleted with the "undef" keyword. Functions do not yet return values and do not yet support parameter passing. They use local (lexically scoped) variables. If a certain variable does not exist locally, then already defined variables are used (e.g. from one scope above). - -``` -func foo { - say 1 + a * 3 + b; - my c = 6; -} - -my a = 2, b = 4; - -foo; # Run the function. Print out "11\n" -say c; # Will produce an error, because c is out of scope! -``` - -### Nested functions - -Nested functions work the same way nested procedures work, with the exception that nested functions will not be available anymore after the function has been left! - -``` -func foo { - func bar { - say "Hello i am nested"; - } - - bar; # Calling nested -} - -foo; -bar; # Will produce an error, because bar is out of scope! -``` - -## Arrays - -Some progress on arrays has been made too. The following example creates a multi-dimensional array "foo". Its first element is the return value of the func "bar". The fourth value is a string "3" converted to a double number. The last element is an anonymous array which itself contains another anonymous array as its last element: - -``` -func bar { say "bar" } -my foo = [bar, 1, 4/2, double "3", ["A", ["BA", "BB"]]]; -say foo; -``` - -It produces the following output: - -``` -% ./fype arrays.fy -bar -01 -2 -3.000000 -A -BA -BB -``` - -## Fancy stuff - -Fancy stuff like OOP or Unicode or threading is not planned. But fancy stuff like function pointers and closures may be considered. :-) - -## May the source be with you - -You can find all of this on the GitHub page. There is also an "examples" folder containing some Fype scripts! - -=> https://github.com/snonux/fype - -E-Mail me your thoughts at comments@mx.buetow.org!
- -=> ../ Go back to the main site diff --git a/content/gemtext/gemfeed/2011-05-07-perl-daemon-service-framework.gmi b/content/gemtext/gemfeed/2011-05-07-perl-daemon-service-framework.gmi deleted file mode 100644 index addd8911..00000000 --- a/content/gemtext/gemfeed/2011-05-07-perl-daemon-service-framework.gmi +++ /dev/null @@ -1,163 +0,0 @@ -# Perl Daemon (Service Framework) - -``` - a'! _,,_ a'! _,,_ a'! _,,_ - \\_/ \ \\_/ \ \\_/ \.-, - \, /-( /'-,\, /-( /'-, \, /-( / - //\ //\\ //\ //\\ //\ //\\jrei -``` - -> Written by Paul Buetow 2011-05-07, last updated 2021-05-07 - -PerlDaemon is a minimal daemon for Linux and other Unix like operating systems programmed in Perl. It is a minimal but pretty functional and fairly generic service framework. This means that it does not do anything useful other than providing a framework for starting, stopping, configuring and logging. In order to do something useful, a module (written in Perl) must be provided. - -## Features - -PerlDaemon supports: - -* Automatic daemonizing -* Logging -* logrotation (via SIGHUP) -* Clean shutdown support (SIGTERM) -* Pid file support (incl. check on startup) -* Easy to configure -* Easy to extend -* Multi instance support (just use a different directory for each instance). - -## Quick Guide - -``` -# Starting - ./bin/perldaemon start (or shortcut ./control start) - -# Stopping - ./bin/perldaemon stop (or shortcut ./control stop) - -# Alternatively: Starting in foreground -./bin/perldaemon start daemon.daemonize=no (or shortcut ./control foreground) -``` - -To stop a daemon running in foreground mode "Ctrl+C" must be hit. To see more available startup options run "./control" without any argument. - -## How to configure - -The daemon instance can be configured in "./conf/perldaemon.conf". If you want to change a property only once, it is also possible to specify it on command line (that then will take precedence over the config file). 
All available config properties can be viewed via "./control keys": - -``` -pb@titania:~/svn/utils/perldaemon/trunk$ ./control keys -# Path to the logfile -daemon.logfile=./log/perldaemon.log - -# The amount of seconds until the next event loop takes place -daemon.loopinterval=1 - -# Path to the modules dir -daemon.modules.dir=./lib/PerlDaemonModules - -# Specifies whether the daemon should run in daemon or foreground mode -daemon.daemonize=yes - -# Path to the pidfile -daemon.pidfile=./run/perldaemon.pid - -# Each module should run every runinterval seconds -daemon.modules.runinterval=3 - -# Path to the alive file (is touched every loopinterval seconds, usable to monitor) -daemon.alivefile=./run/perldaemon.alive - -# Specifies the working directory -daemon.wd=./ -``` - -## Example - -So let's start the daemon with a loop interval of 10 seconds: - -``` -$ ./control keys | grep daemon.loopinterval -daemon.loopinterval=1 -$ ./control keys daemon.loopinterval=10 | grep daemon.loopinterval -daemon.loopinterval=10 -$ ./control start daemon.loopinterval=10; sleep 10; tail -n 2 log/perldaemon.log -Starting daemon now... -Mon Jun 13 11:29:27 2011 (PID 2838): Triggering PerlDaemonModules::ExampleModule -(last triggered before 10.002106s; carry: 7.002106s; wanted interval: 3s) -Mon Jun 13 11:29:27 2011 (PID 2838): ExampleModule Test 2 -$ ./control stop -Stopping daemon now... -``` - -If you want to change that property permanently, either edit perldaemon.conf or do this: - -``` -$ ./control keys daemon.loopinterval=10 > new.conf; mv new.conf conf/perldaemon.conf -``` - -## HiRes event loop - -PerlDaemon uses `Time::HiRes` to make sure that all the events run at the correct intervals. On each loop run, a time carry value is recorded and added to the next loop run in order to catch up lost time. - -## Writing your own modules - -### Example module - -This is one of the example modules you will find in the source code. It should be quite self-explanatory if you know Perl :-).
- -``` -package PerlDaemonModules::ExampleModule; - -use strict; -use warnings; - -sub new ($$$) { - my ($class, $conf) = @_; - - my $self = bless { conf => $conf }, $class; - - # Store some private module stuff - $self->{counter} = 0; - - return $self; -} - -# Runs periodically in a loop (set interval in perldaemon.conf) -sub do ($) { - my $self = shift; - my $conf = $self->{conf}; - my $logger = $conf->{logger}; - - # Calculate some private module stuff - my $count = ++$self->{counter}; - - $logger->logmsg("ExampleModule Test $count"); -} - -1; -``` - -### Your own module - -Want to give it some better use? It's just as easy as: - -``` - cd ./lib/PerlDaemonModules/ - cp ExampleModule.pm YourModule.pm - vi YourModule.pm - cd - - ./bin/perldaemon restart (or shortcut ./control restart) -``` - -Now watch `./log/perldaemon.log` closely. It is a good practice to test your modules in 'foreground mode' (see above how to do that). - -BTW: You can install as many modules within the same instance as desired. But they are run in sequential order (in the future they may also run in parallel using several threads or processes). - -## May the source be with you - -You can find PerlDaemon (including the examples) at: - -=> https://github.com/snonux/perldaemon - -E-Mail me your thoughts at comments@mx.buetow.org!
- -=> ../ Go back to the main site diff --git a/content/gemtext/gemfeed/2014-03-24-the-fibonacci.pl.c-polyglot.gmi b/content/gemtext/gemfeed/2014-03-24-the-fibonacci.pl.c-polyglot.gmi deleted file mode 100644 index 06a463c6..00000000 --- a/content/gemtext/gemfeed/2014-03-24-the-fibonacci.pl.c-polyglot.gmi +++ /dev/null @@ -1,110 +0,0 @@ -# The fibonacci.pl.c Polyglot - -> Written by Paul Buetow 2014-03-24 - -In computing, a polyglot is a computer program or script written in a valid form of multiple programming languages, which performs the same operations or output independent of the programming language used to compile or interpret it. - -=> https://en.wikipedia.org/wiki/Polyglot_(computing) - -## The Fibonacci numbers - -For fun, I programmed my own polyglot, which is both valid Perl and C code. The interesting part about C is that $ is a valid character to start variable names with (a common compiler extension): - -``` -#include <stdio.h> - -#define $arg function_argument -#define my int -#define sub int -#define BEGIN int main(void) - -my $arg; - -sub hello() { - printf("Hello, welcome to Perl-C!\n"); - printf("This program is both, valid C and Perl code!\n"); - printf("It calculates all fibonacci numbers from 0 to 9!\n\n"); - return 0; -} - -sub fibonacci() { - my $n = $arg; - - if ($n < 2) { - return $n; - } - - $arg = $n - 1; - my $fib1 = fibonacci(); - $arg = $n - 2; - my $fib2 = fibonacci(); - - return $fib1 + $fib2; -} - -BEGIN { - hello(); - my $i = 0; - - for ($i = 0; $i <= 10; ++$i) { - $arg = $i; - printf("fib(%d) = %d\n", $i, fibonacci()); - } - - return 0; -} -``` - -You can find the whole source code at GitHub: - -=> https://github.com/snonux/perl-c-fibonacci - -### Let's run it with Perl: - -``` -❯ perl fibonacci.pl.c -Hello, welcome to Perl-C! -This program is both, valid C and Perl code! -It calculates all fibonacci numbers from 0 to 9!
- -fib(0) = 0 -fib(1) = 1 -fib(2) = 1 -fib(3) = 2 -fib(4) = 3 -fib(5) = 5 -fib(6) = 8 -fib(7) = 13 -fib(8) = 21 -fib(9) = 34 -fib(10) = 55 -``` - - -### Let's compile it as C and run the binary: - -``` -❯ gcc fibonacci.pl.c -o fibonacci -❯ ./fibonacci -Hello, welcome to Perl-C! -This program is both, valid C and Perl code! -It calculates all fibonacci numbers from 0 to 9! - -fib(0) = 0 -fib(1) = 1 -fib(2) = 1 -fib(3) = 2 -fib(4) = 3 -fib(5) = 5 -fib(6) = 8 -fib(7) = 13 -fib(8) = 21 -fib(9) = 34 -fib(10) = 55 -``` - -It's really fun to play with :-). - -E-Mail me your thoughts at comments@mx.buetow.org! - -=> ../ Go back to the main site diff --git a/content/gemtext/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid.gmi b/content/gemtext/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid.gmi deleted file mode 100644 index 03857830..00000000 --- a/content/gemtext/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid.gmi +++ /dev/null @@ -1,180 +0,0 @@ -# Run Debian on your phone with Debroid - -``` - ____ _ _ _ -| _ \ ___| |__ _ __ ___ (_) __| | -| | | |/ _ \ '_ \| '__/ _ \| |/ _` | -| |_| | __/ |_) | | | (_) | | (_| | -|____/ \___|_.__/|_| \___/|_|\__,_| - -``` - -> Written by Paul Buetow 2015-12-05, last updated 2021-05-16 - -You can use the following tutorial to install a full-blown Debian GNU/Linux Chroot on a LG G3 D855 CyanogenMod 13 (Android 6). First of all you need to have root permissions on your phone and you also need to have the developer mode activated. The following steps have been tested on Linux (Fedora 23). - -=> ./2015-12-05-run-debian-on-your-phone-with-debroid/Deboroid.png - -## Foreword - -A couple of years have passed since I last worked on Debroid. At the moment I am using the Termux app on Android, which is less sophisticated than a fully blown Debian installation, but sufficient for my current requirements. The content of this site may be still relevant and it would also work with more recent versions of Debian and Android. 
I would expect that some minor modifications need to be made though.
-
-## Step by step guide
-
-All scripts mentioned here can be found on GitHub at:
-
-=> https://github.com/snonux/debroid
-
-### First debootstrap stage
-
-This is to be performed on a Fedora Linux machine (it could work on Debian too, but Fedora is just what I use on my personal laptop). The following steps prepare an initial Debian base image, which can later be transferred to the phone.
-
-```
-sudo dnf install debootstrap
-# 5G image
-dd if=/dev/zero of=jessie.img bs=$[ 1024 * 1024 ] \
-    count=$[ 1024 * 5 ]
-
-# Show used loop devices
-sudo losetup -f
-# Store the next free one to $loop
-loop=loopN
-sudo losetup /dev/$loop jessie.img
-
-mkdir jessie
-sudo mkfs.ext4 /dev/$loop
-sudo mount /dev/$loop jessie
-sudo debootstrap --foreign --variant=minbase \
-    --arch armel jessie jessie/ \
-    http://http.debian.net/debian
-sudo umount jessie
-```
-
-### Copy Debian image to the phone
-
-Now set up the Debian image on an external SD card on the phone via the Android Debugger as follows:
-
-```
-adb root && adb wait-for-device && adb shell
-mkdir -p /storage/sdcard1/Linux/jessie
-exit
-
-# Sparse image problem, may be too big for copying otherwise
-gzip jessie.img
-# Copy over
-adb push jessie.img.gz /storage/sdcard1/Linux/jessie.img.gz
-adb shell
-cd /storage/sdcard1/Linux
-gunzip jessie.img.gz
-
-# Show used loop devices
-losetup -f
-# Store the next free one to $loop
-loop=loopN
-
-# Use the next free one (replace the loop number)
-losetup /dev/block/$loop $(pwd)/jessie.img
-mount -t ext4 /dev/block/$loop $(pwd)/jessie
-
-# Bind-mount proc, dev, sys
-busybox mount --bind /proc $(pwd)/jessie/proc
-busybox mount --bind /dev $(pwd)/jessie/dev
-busybox mount --bind /dev/pts $(pwd)/jessie/dev/pts
-busybox mount --bind /sys $(pwd)/jessie/sys
-
-# Bind-mount the rest of Android
-mkdir -p $(pwd)/jessie/storage/sdcard{0,1}
-busybox mount --bind /storage/emulated \
-    $(pwd)/jessie/storage/sdcard0
-busybox mount 
--bind /storage/sdcard1 \
-    $(pwd)/jessie/storage/sdcard1
-
-# Check mounts
-mount | grep jessie
-```
-
-### Second debootstrap stage
-
-This is to be performed on the Android phone itself (inside a Debian chroot):
-
-```
-chroot $(pwd)/jessie /bin/bash -l
-export PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin
-/debootstrap/debootstrap --second-stage
-exit # Leave chroot
-exit # Leave adb shell
-```
-
-### Setup of various scripts
-
-jessie.sh deals with all the loopback mount magic and so on. It will be run later every time you start Debroid on your phone.
-
-```
-# Install the script jessie.sh
-adb push storage/sdcard1/Linux/jessie.sh /storage/sdcard1/Linux/jessie.sh
-adb shell
-cd /storage/sdcard1/Linux
-sh jessie.sh enter
-
-# Bashrc
-cat <<END >~/.bashrc
-export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH
-export EDITOR=vim
-hostname $(cat /etc/hostname)
-END
-
-# Fixing an error message while loading the profile
-sed -i s#id#/usr/bin/id# /etc/profile
-
-# Setting the hostname
-echo phobos > /etc/hostname
-echo 127.0.0.1 phobos > /etc/hosts
-hostname phobos
-
-# Apt-sources
-cat <<END > sources.list
-deb http://ftp.uk.debian.org/debian/ jessie main contrib non-free
-deb-src http://ftp.uk.debian.org/debian/ jessie main contrib non-free
-END
-apt-get update
-apt-get upgrade
-apt-get dist-upgrade
-exit # Exit chroot
-```
-
-### Entering Debroid and enabling a service
-
-This enters Debroid on your phone and starts the example service uptimed:
-
-```
-sh jessie.sh enter
-
-# Set up the example service uptimed
-apt-get install uptimed
-cat <<END > /etc/rc.debroid
-export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH
-service uptimed status &>/dev/null || service uptimed start
-exit 0
-END
-
-chmod 0755 /etc/rc.debroid
-exit # Exit chroot
-exit # Exit adb shell
-```
-
-### Include in Android startup
-
-If you want to start Debroid automatically every time your phone starts, then do the following:
- 
-``` -adb push data/local/userinit.sh /data/local/userinit.sh -adb shell -chmod +x /data/local/userinit.sh -exit -``` - -Reboot & test! Enjoy! - -E-Mail me your thoughts at comments@mx.buetow.org! - -=> ../ Go back to the main site diff --git a/content/gemtext/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid/Deboroid.png b/content/gemtext/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid/Deboroid.png Binary files differdeleted file mode 100644 index f76cf226..00000000 --- a/content/gemtext/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid/Deboroid.png +++ /dev/null diff --git a/content/gemtext/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi b/content/gemtext/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi deleted file mode 100644 index cbde3ce6..00000000 --- a/content/gemtext/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi +++ /dev/null @@ -1,42 +0,0 @@ -# Offsite backup with ZFS - -``` - ________________ -|# : : #| -| : ZFS/GELI : | -| : Offsite : | -| : Backup : | -| :___________: | -| _________ | -| | __ | | -| || | | | -\____||__|_____|__| -``` - -> Written by Paul Buetow 2016-04-03 - -## Please don't lose all my pictures again! - -When it comes to data storage and potential data loss I am a paranoid person. It is not just due to my job but also due to a personal experience I encountered over 10 years ago: A single drive failure and loss of all my data (pictures, music, ....). - -A little about my personal infrastructure: I am running my own (mostly FreeBSD based) root servers (across several countries: Two in Germany, one in Canada, one in Bulgaria) which store all my online data (E-Mail and my Git repositories). I am syncing incremental (and encrypted) ZFS snapshots between these servers forth and back so either data could be recovered from the other server. - -## Local storage box for offline data - -Also, I am operating a local server (an HP MicroServer) at home in my apartment. 
Full snapshots of all ZFS volumes are pulled from the "online" servers to the local server every other week and the incremental ZFS snapshots every day. That local server has a ZFS ZMIRROR with 3 disks configured (local triple redundancy). I keep up to half a year worth of ZFS snapshots of all volumes. That local server also contains all my offline data such as pictures, private documents, videos, books, various other backups, etc. - -Once weekly all the data of that local server is copied to two external USB drives as a backup (without the historic snapshots). For simplicity these USB drives are not formatted with ZFS but with good old UFS. This gives me a chance to recover from a (potential) ZFS disaster. ZFS is a complex thing. Sometimes it is good not to trust complex things! - -## Storing it at my apartment is not enough - -Now I am thinking about an offsite backup of all this local data. The problem is, that all the data remains on a single physical location: My local MicroServer. What happens when the house burns or someone steals my server including the internal disks and the attached USB drives? My first thought was to back up everything to the "cloud". The major issue here is however the limited amount of available upload bandwidth (only 1MBit/s). - -The solution is adding another USB drive (2TB) with an encryption container (GELI) and a ZFS pool on it. The GELI encryption requires a secret key and a secret passphrase. I am updating the data to that drive once every 3 months (my calendar is reminding me about it) and afterwards I keep that drive at a secret location outside of my apartment. All the information needed to decrypt (mounting the GELI container) is stored at another (secure) place. Key and passphrase are kept at different places though. Even if someone would know of it, he would not be able to decrypt it as some additional insider knowledge would be required as well. 
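
The drive setup described above boils down to a handful of FreeBSD commands. Here is a rough sketch of how such a GELI + ZFS offsite drive can be prepared — device name, key path, pool and snapshot names are placeholders for illustration, not my actual setup:

```shell
# One-time setup of the encrypted offsite drive (sketch; da1, the key
# path and the pool/snapshot names are placeholders).
dd if=/dev/random of=/secret/offsite.key bs=64 count=1
geli init -s 4096 -K /secret/offsite.key /dev/da1   # also prompts for the passphrase
geli attach -k /secret/offsite.key /dev/da1         # prompts for the passphrase again
zpool create offsite /dev/da1.eli

# Copy the current state of the local pool over.
zfs snapshot -r ztank@offsite-2016-04
zfs send -R ztank@offsite-2016-04 | zfs recv -Fdu offsite

# Detach cleanly before bringing the drive to the secret location.
zpool export offsite
geli detach da1.eli
```

Because the attach step needs both the key file and the passphrase, the drive alone is useless to whoever finds it.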
- -## Walking one round less - -I am thinking of buying a second 2TB USB drive and to set it up the same way as the first one. So I could alternate the backups. One drive would be at the secret location, and the other drive would be at home. And these drives would swap location after each cycle. This would give some security about the failure of that drive and I would have to go to the secret location only once (swapping the drives) instead of twice (picking that drive up in order to update the data + bringing it back to the secret location). - -E-Mail me your thoughts at comments@mx.buetow.org! - -=> ../ Go back to the main site diff --git a/content/gemtext/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi b/content/gemtext/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi deleted file mode 100644 index 8ef18d8f..00000000 --- a/content/gemtext/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi +++ /dev/null @@ -1,392 +0,0 @@ -# Jails and ZFS with Puppet on FreeBSD - -``` - __ __ - (( \---/ )) - )__ __( - / ()___() \ - \ /(_)\ / - \ \_|_/ / - _______> <_______ - //\ |>o<| /\\ - \\/___ ___\// - | | - | | - | | - | | - `--....---' - \ \ - \ `. hjw - \ `. -``` - -> Written by Paul Buetow 2016-04-09 - -Over the last couple of years I wrote quite a few Puppet modules in order to manage my personal server infrastructure. One of them manages FreeBSD Jails and another one ZFS file systems. I thought I would give a brief overview in how it looks and feels. - -=> https://github.com/snonux/puppet-modules - -## ZFS - -The ZFS module is a pretty basic one. It does not manage ZFS pools yet as I am not creating them often enough which would justify implementing an automation. 
But let's see how we can create a ZFS file system (on an already given ZFS pool named ztank): - -Puppet snippet: - -``` -zfs::create { 'ztank/foo': - ensure => present, - filesystem => '/srv/foo', - - require => File['/srv'], -} -``` - -Puppet run: - -``` -admin alphacentauri:/opt/git/server/puppet/manifests [1212]% puppet.apply -Password: -Info: Loading facts -Info: Loading facts -Info: Loading facts -Info: Loading facts -Notice: Compiled catalog for alphacentauri.home in environment production in 7.14 seconds -Info: Applying configuration version '1460189837' -Info: mount[files]: allowing * access -Info: mount[restricted]: allowing * access -Notice: /Stage[main]/Main/Node[alphacentauri]/Zfs::Create[ztank/foo]/Exec[ztank/foo_create]/returns: executed successfully -Notice: Finished catalog run in 25.41 seconds -admin alphacentauri:~ [1213]% zfs list | grep foo -ztank/foo 96K 1.13T 96K /srv/foo -admin alphacentauri:~ [1214]% df | grep foo -ztank/foo 1214493520 96 1214493424 0% /srv/foo -admin alphacentauri:~ [1215]% -``` - -The destruction of the file system just requires to set "ensure" to "absent" in Puppet: - -``` -zfs::create { 'ztank/foo': - ensure => absent, - filesystem => '/srv/foo', - - require => File['/srv'], -}¬ -``` - -Puppet run: - -``` -admin alphacentauri:/opt/git/server/puppet/manifests [1220]% puppet.apply -Password: -Info: Loading facts -Info: Loading facts -Info: Loading facts -Info: Loading facts -Notice: Compiled catalog for alphacentauri.home in environment production in 6.14 seconds -Info: Applying configuration version '1460190203' -Info: mount[files]: allowing * access -Info: mount[restricted]: allowing * access -Notice: /Stage[main]/Main/Node[alphacentauri]/Zfs::Create[ztank/foo]/Exec[zfs destroy -r ztank/foo]/returns: executed successfully -Notice: Finished catalog run in 22.72 seconds -admin alphacentauri:/opt/git/server/puppet/manifests [1221]% zfs list | grep foo -zsh: done zfs list | -zsh: exit 1 grep foo -admin 
alphacentauri:/opt/git/server/puppet/manifests [1222:1]% df | grep foo -zsh: done df | -zsh: exit 1 grep foo -``` - -## Jails - -Here is an example in how a FreeBSD Jail can be created. The Jail will have its own public IPv6 address. And it will have its own internal IPv4 address with IPv4 NAT to the internet (this is due to the limitation that the host server only got one public IPv4 address which requires sharing between all the Jails). - -Furthermore, Puppet will ensure that the Jail will have its own ZFS file system (internally it is using the ZFS module). Please notice that the NAT requires the packet filter to be setup correctly (not covered in this blog post). - -``` -include jail::freebsd - -# Cloned interface for Jail IPv4 NAT -freebsd::rc_config { 'cloned_interfaces': - value => 'lo1', -} -freebsd::rc_config { 'ipv4_addrs_lo1': - value => '192.168.0.1-24/24' -} - -freebsd::ipalias { '2a01:4f8:120:30e8::17': - ensure => up, - proto => 'inet6', - preflen => '64', - interface => 're0', - aliasnum => '8', -} - -class { 'jail': - ensure => present, - jails_config => { - sync => { - '_ensure' => present, - '_type' => 'freebsd', - '_mirror' => 'ftp://ftp.de.freebsd.org', - '_remote_path' => 'FreeBSD/releases/amd64/10.1-RELEASE', - '_dists' => [ 'base.txz', 'doc.txz', ], - '_ensure_directories' => [ '/opt', '/opt/enc' ], - '_ensure_zfs' => [ '/sync' ], - 'host.hostname' => "'sync.ian.buetow.org'", - 'ip4.addr' => '192.168.0.17', - 'ip6.addr' => '2a01:4f8:120:30e8::17', - }, - } -} -``` - -This is how the result looks like: - -``` -admin sun:/etc [1939]% puppet.apply -Info: Loading facts -Info: Loading facts -Info: Loading facts -Info: Loading facts -Notice: Compiled catalog for sun.ian.buetow.org in environment production in 1.80 seconds -Info: Applying configuration version '1460190986' -Notice: /Stage[main]/Jail/File[/etc/jail.conf]/ensure: created -Info: mount[files]: allowing * access -Info: mount[restricted]: allowing * access -Info: Computing checksum on 
file /etc/motd -Info: /Stage[main]/Motd/File[/etc/motd]: Filebucketed /etc/motd to puppet with sum fced1b6e89f50ef2c40b0d7fba9defe8 -Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/Zfs::Create[zroot/jail/sync]/Exec[zroot/jail/sync_create]/returns: executed successfully -Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync/opt]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync/opt/enc]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Ensure_zfs[/sync]/Zfs::Create[zroot/jail/sync/sync]/Exec[zroot/jail/sync/sync_create]/returns: executed successfully -Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/jail/sync/.jailbootstrap]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/etc/fstab.jail.sync]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/jail/sync/.jailbootstrap/bootstrap.sh]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/Exec[sync_bootstrap]/returns: executed successfully -Notice: Finished catalog run in 49.72 seconds -admin sun:/etc [1942]% ls -l /jail/sync -total 154 --r--r--r-- 1 root wheel 6198 11 Nov 2014 COPYRIGHT -drwxr-xr-x 2 root wheel 47 11 Nov 2014 bin -drwxr-xr-x 7 root wheel 43 11 Nov 2014 boot -dr-xr-xr-x 2 root wheel 2 11 Nov 2014 dev -drwxr-xr-x 23 root wheel 101 9 Apr 10:37 etc -drwxr-xr-x 3 root wheel 50 11 Nov 2014 lib -drwxr-xr-x 3 root wheel 4 11 Nov 2014 libexec -drwxr-xr-x 2 root wheel 2 11 Nov 2014 media -drwxr-xr-x 2 root wheel 2 11 Nov 2014 mnt -drwxr-xr-x 3 root wheel 3 9 Apr 10:36 opt -dr-xr-xr-x 2 root wheel 2 11 Nov 2014 proc -drwxr-xr-x 2 root wheel 143 11 Nov 2014 rescue -drwxr-xr-x 2 root wheel 6 11 Nov 2014 root -drwxr-xr-x 2 root wheel 132 11 Nov 2014 sbin -drwxr-xr-x 2 root wheel 2 9 Apr 10:36 sync -lrwxr-xr-x 1 root wheel 11 11 Nov 
2014 sys -> usr/src/sys -drwxrwxrwt 2 root wheel 2 11 Nov 2014 tmp -drwxr-xr-x 14 root wheel 14 11 Nov 2014 usr -drwxr-xr-x 24 root wheel 24 11 Nov 2014 var -admin sun:/etc [1943]% zfs list | grep sync;df | grep sync -zroot/jail/sync 162M 343G 162M /jail/sync -zroot/jail/sync/sync 144K 343G 144K /jail/sync/sync -/opt/enc 5061624 84248 4572448 2% /jail/sync/opt/enc -zroot/jail/sync 360214972 166372 360048600 0% /jail/sync -zroot/jail/sync/sync 360048744 144 360048600 0% /jail/sync/sync -admin sun:/etc [1944]% cat /etc/fstab.jail.sync -# Generated by Puppet for a Jail. -# Can contain file systems to be mounted curing jail start. -admin sun:/etc [1945]% cat /etc/jail.conf -# Generated by Puppet - -allow.chflags = true; -exec.start = '/bin/sh /etc/rc'; -exec.stop = '/bin/sh /etc/rc.shutdown'; -mount.devfs = true; -mount.fstab = "/etc/fstab.jail.$name"; -path = "/jail/$name"; - -sync { - host.hostname = 'sync.ian.buetow.org'; - ip4.addr = 192.168.0.17; - ip6.addr = 2a01:4f8:120:30e8::17; -} -admin sun:/etc [1955]% sudo service jail start sync -Password: -Starting jails: sync. 
-admin sun:/etc [1956]% jls | grep sync - 103 192.168.0.17 sync.ian.buetow.org /jail/sync -admin sun:/etc [1957]% sudo jexec 103 /bin/csh -root@sync:/ # ifconfig -a -re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 - options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE> - ether 50:46:5d:9f:fd:1e - inet6 2a01:4f8:120:30e8::17 prefixlen 64 - nd6 options=8021<PERFORMNUD,AUTO_LINKLOCAL,DEFAULTIF> - media: Ethernet autoselect (1000baseT <full-duplex>) - status: active -lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 - options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6> - nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL> - lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 - options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6> - inet 192.168.0.17 netmask 0xffffffff - nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL> -``` - -## Inside-Jail Puppet - -To automatically setup the applications running in the Jail I am using Puppet as well. I wrote a few scripts which bootstrap Puppet inside of a newly created Jail. It is doing the following: - -* Mounts an encrypted container (containing a secret Puppet manifests [git repository]) -* Activates "pkg-ng", the FreeBSD binary package manager, in the Jail -* Installs Puppet plus all dependencies in the Jail -* Updates the Jail via "freebsd-update" to the latest version -* Restarts the Jail and invokes Puppet. -* Puppet then also schedules a periodic cron job for the next Puppet runs. - -``` -admin sun:~ [1951]% sudo /opt/snonux/local/etc/init.d/enc activate sync -Starting jails: dns. -The package management tool is not yet installed on your system. -Do you want to fetch and install it now? [y/N]: y -Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/freebsd:10:x86:64/latest, please wait... -Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done -[sync.ian.buetow.org] Installing pkg-1.7.2... 
-[sync.ian.buetow.org] Extracting pkg-1.7.2: 100% -Updating FreeBSD repository catalogue... -[sync.ian.buetow.org] Fetching meta.txz: 100% 944 B 0.9kB/s 00:01 -[sync.ian.buetow.org] Fetching packagesite.txz: 100% 5 MiB 5.6MB/s 00:01 -Processing entries: 100% -FreeBSD repository update completed. 25091 packages processed. -Updating database digests format: 100% -The following 20 package(s) will be affected (of 0 checked): - - New packages to be INSTALLED: - git: 2.7.4_1 - expat: 2.1.0_3 - python27: 2.7.11_1 - libffi: 3.2.1 - indexinfo: 0.2.4 - gettext-runtime: 0.19.7 - p5-Error: 0.17024 - perl5: 5.20.3_9 - cvsps: 2.1_1 - p5-Authen-SASL: 2.16_1 - p5-Digest-HMAC: 1.03_1 - p5-GSSAPI: 0.28_1 - curl: 7.48.0_1 - ca_root_nss: 3.22.2 - p5-Net-SMTP-SSL: 1.03 - p5-IO-Socket-SSL: 2.024 - p5-Net-SSLeay: 1.72 - p5-IO-Socket-IP: 0.37 - p5-Socket: 2.021 - p5-Mozilla-CA: 20160104 - - The process will require 144 MiB more space. - 30 MiB to be downloaded. -[sync.ian.buetow.org] Fetching git-2.7.4_1.txz: 100% 4 MiB 3.7MB/s 00:01 -[sync.ian.buetow.org] Fetching expat-2.1.0_3.txz: 100% 98 KiB 100.2kB/s 00:01 -[sync.ian.buetow.org] Fetching python27-2.7.11_1.txz: 100% 10 MiB 10.7MB/s 00:01 -[sync.ian.buetow.org] Fetching libffi-3.2.1.txz: 100% 35 KiB 36.2kB/s 00:01 -[sync.ian.buetow.org] Fetching indexinfo-0.2.4.txz: 100% 5 KiB 5.0kB/s 00:01 -[sync.ian.buetow.org] Fetching gettext-runtime-0.19.7.txz: 100% 148 KiB 151.1kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Error-0.17024.txz: 100% 24 KiB 24.8kB/s 00:01 -[sync.ian.buetow.org] Fetching perl5-5.20.3_9.txz: 100% 13 MiB 6.9MB/s 00:02 -[sync.ian.buetow.org] Fetching cvsps-2.1_1.txz: 100% 41 KiB 42.1kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Authen-SASL-2.16_1.txz: 100% 44 KiB 45.1kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Digest-HMAC-1.03_1.txz: 100% 9 KiB 9.5kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-GSSAPI-0.28_1.txz: 100% 41 KiB 41.7kB/s 00:01 -[sync.ian.buetow.org] Fetching curl-7.48.0_1.txz: 100% 2 MiB 2.2MB/s 00:01 
-[sync.ian.buetow.org] Fetching ca_root_nss-3.22.2.txz: 100% 324 KiB 331.4kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Net-SMTP-SSL-1.03.txz: 100% 11 KiB 10.8kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-IO-Socket-SSL-2.024.txz: 100% 153 KiB 156.4kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Net-SSLeay-1.72.txz: 100% 234 KiB 239.3kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-IO-Socket-IP-0.37.txz: 100% 27 KiB 27.4kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Socket-2.021.txz: 100% 37 KiB 38.0kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Mozilla-CA-20160104.txz: 100% 147 KiB 150.8kB/s 00:01 -Checking integrity... -[sync.ian.buetow.org] [1/12] Installing libyaml-0.1.6_2... -[sync.ian.buetow.org] [1/12] Extracting libyaml-0.1.6_2: 100% -[sync.ian.buetow.org] [2/12] Installing libedit-3.1.20150325_2... -[sync.ian.buetow.org] [2/12] Extracting libedit-3.1.20150325_2: 100% -[sync.ian.buetow.org] [3/12] Installing ruby-2.2.4,1... -[sync.ian.buetow.org] [3/12] Extracting ruby-2.2.4,1: 100% -[sync.ian.buetow.org] [4/12] Installing ruby22-gems-2.6.2... -[sync.ian.buetow.org] [4/12] Extracting ruby22-gems-2.6.2: 100% -[sync.ian.buetow.org] [5/12] Installing libxml2-2.9.3... -[sync.ian.buetow.org] [5/12] Extracting libxml2-2.9.3: 100% -[sync.ian.buetow.org] [6/12] Installing dmidecode-3.0... -[sync.ian.buetow.org] [6/12] Extracting dmidecode-3.0: 100% -[sync.ian.buetow.org] [7/12] Installing rubygem-json_pure-1.8.3... -[sync.ian.buetow.org] [7/12] Extracting rubygem-json_pure-1.8.3: 100% -[sync.ian.buetow.org] [8/12] Installing augeas-1.4.0... -[sync.ian.buetow.org] [8/12] Extracting augeas-1.4.0: 100% -[sync.ian.buetow.org] [9/12] Installing rubygem-facter-2.4.4... -[sync.ian.buetow.org] [9/12] Extracting rubygem-facter-2.4.4: 100% -[sync.ian.buetow.org] [10/12] Installing rubygem-hiera1-1.3.4_1... -[sync.ian.buetow.org] [10/12] Extracting rubygem-hiera1-1.3.4_1: 100% -[sync.ian.buetow.org] [11/12] Installing rubygem-ruby-augeas-0.5.0_2... 
-[sync.ian.buetow.org] [11/12] Extracting rubygem-ruby-augeas-0.5.0_2: 100% -[sync.ian.buetow.org] [12/12] Installing puppet38-3.8.4_1... -===> Creating users and/or groups. -Creating group 'puppet' with gid '814'. -Creating user 'puppet' with uid '814'. -[sync.ian.buetow.org] [12/12] Extracting puppet38-3.8.4_1: 100% -. -. -. -. -. -Looking up update.FreeBSD.org mirrors... 4 mirrors found. -Fetching public key from update4.freebsd.org... done. -Fetching metadata signature for 10.1-RELEASE from update4.freebsd.org... done. -Fetching metadata index... done. -Fetching 2 metadata files... done. -Inspecting system... done. -Preparing to download files... done. -Fetching 874 patches.....10....20....30.... -. -. -. -Applying patches... done. -Fetching 1594 files... -Installing updates... -done. -Info: Loading facts -Info: Loading facts -Info: Loading facts -Info: Loading facts -Could not retrieve fact='pkgng_version', resolution='<anonymous>': undefined method `pkgng_enabled' for Facter:Module -Warning: Config file /usr/local/etc/puppet/hiera.yaml not found, using Hiera defaults -Notice: Compiled catalog for sync.ian.buetow.org in environment production in 1.31 seconds -Warning: Found multiple default providers for package: pkgng, gem, pip; using pkgng -Info: Applying configuration version '1460192563' -Notice: /Stage[main]/S_base_freebsd/User[root]/shell: shell changed '/bin/csh' to '/bin/tcsh' -Notice: /Stage[main]/S_user::Root_files/S_user::All_files[root_user]/File[/root/user]/ensure: created -Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/userfiles]/ensure: created -Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/.task]/ensure: created -. -. -. -. 
-Notice: Finished catalog run in 206.09 seconds
-```
-
-## Managing multiple Jails
-
-Of course I am operating multiple Jails on the same host this way with Puppet:
-
-* A Jail for the MTA
-* A Jail for the Webserver
-* A Jail for the BIND DNS server
-* A Jail for syncing data back and forth between various servers
-* A Jail for other personal (experimental) use
-* ...etc
-
-All done in a pretty automated manner.
-
-E-Mail me your thoughts at comments@mx.buetow.org!
-
-=> ../ Go back to the main site
diff --git a/content/gemtext/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi b/content/gemtext/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi
deleted file mode 100644
index beb1ab9f..00000000
--- a/content/gemtext/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi
+++ /dev/null
@@ -1,30 +0,0 @@
-# Offsite backup with ZFS (Part 2)
-
-```
- ________________
-|# : : #|
-| : ZFS/GELI : |________________
-| : Offsite : |# : : #|
-| : Backup 1 : | : ZFS/GELI : |
-| :___________: | : Offsite : |
-| _________ | : Backup 2 : |
-| | __ | | :___________: |
-| || | | | _________ |
-\____||__|_____|_| | __ | |
- | || | | |
- \____||__|_____|__|
-```
-
-> Written by Paul Buetow 2016-04-16
-
-=> ./2016-04-03-offsite-backup-with-zfs.gmi Read the first part before reading any further here...
-
-I enhanced the procedure a bit. From now on I have two external 2TB USB hard drives. Both are set up exactly the same way. To decrease the probability that both drives fail at about the same time, they are of different brands. One drive is kept at the secret location. The other one is kept at home right next to my HP MicroServer.
-
-Whenever I update the offsite backup, I do it on the drive which is kept locally. Afterwards I bring it to the secret location, swap the drives and bring the other one back home. This ensures that I always have an offsite backup available at a different location than my home - even while updating one copy of it. 
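
The quarterly refresh of whichever drive is currently at home then looks roughly like this — again only a sketch, with placeholder device, pool and snapshot names:

```shell
# Refresh the offsite drive that is currently at home (sketch;
# da1 and the pool/snapshot names are placeholders).
geli attach -k /secret/offsite.key /dev/da1
zpool import offsite

# Only send what changed since the previous refresh cycle.
zfs snapshot -r ztank@offsite-2016-07
zfs send -R -i ztank@offsite-2016-04 ztank@offsite-2016-07 | zfs recv -Fdu offsite

zpool export offsite
geli detach da1.eli
```
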
-
-Furthermore, I added scrubbing (*zpool scrub...*) to the script. It verifies that the file system is consistent and that there are no bad blocks on the disk. To increase reliability I also ran *zfs set copies=2 zroot*. That setting is also synchronized to the offsite ZFS pool. ZFS now stores every data block to disk twice. Yes, it consumes twice as much disk space, but it makes the pool more fault tolerant against hardware errors (e.g. only individual disk sectors going bad).
-
-E-Mail me your thoughts at comments@mx.buetow.org!
-
-=> ../ Go back to the main site
diff --git a/content/gemtext/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi b/content/gemtext/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi
deleted file mode 100644
index 1be6fa74..00000000
--- a/content/gemtext/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi
+++ /dev/null
@@ -1,239 +0,0 @@
-# Spinning up my own authoritative DNS servers
-
-> Written by Paul Buetow 2016-05-22
-
-## Background
-
-Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains "buetow.org" and "buetow.zone". My domain name provider is Schlund Technologies. They allow their customers to manually edit DNS records (BIND zone files), and they also give you the option to run your own authoritative DNS servers for your domains. From now on, I am making use of that option. 
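
Once the new servers are live, a couple of dig queries make it easy to verify the delegation and that master and slave are in sync (this assumes a machine with network access; any resolver host will do):

```shell
# Verify the NS delegation of the domain.
dig +short NS buetow.org

# The SOA serial must match on both servers once the zone transfer is done.
dig +short SOA buetow.org @dns1.buetow.org
dig +short SOA buetow.org @dns2.buetow.org
```
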
- -=> http://www.schlundtech.de Schlund Technologies - -## All FreeBSD Jails - -In order to set up my authoritative DNS servers I installed a FreeBSD Jail dedicated for DNS with Puppet on my root machine as follows: - -``` -include freebsd - -freebsd::ipalias { '2a01:4f8:120:30e8::14': - ensure => up, - proto => 'inet6', - preflen => '64', - interface => 're0', - aliasnum => '5', -} - -include jail::freebsd - -class { 'jail': - ensure => present, - jails_config => { - dns => { - '_ensure' => present, - '_type' => 'freebsd', - '_mirror' => 'ftp://ftp.de.freebsd.org', - '_remote_path' => 'FreeBSD/releases/amd64/10.1-RELEASE', - '_dists' => [ 'base.txz', 'doc.txz', ], - '_ensure_directories' => [ '/opt', '/opt/enc' ], - 'host.hostname' => "'dns.ian.buetow.org'", - 'ip4.addr' => '192.168.0.15', - 'ip6.addr' => '2a01:4f8:120:30e8::15', - }, - . - . - } -} -``` - -## PF firewall - -Please note that "dns.ian.buetow.org" is just the Jail name of the master DNS server (and "caprica.ian.buetow.org" the name of the Jail for the slave DNS server) and that I am using the DNS names "dns1.buetow.org" (master) and "dns2.buetow.org" (slave) for the actual service names (these are the DNS servers visible to the public). Please also note that the IPv4 address is an internal one. I have a PF to use NAT and PAT. The DNS ports are being forwarded (TCP and UDP) to that Jail. By default, all ports are blocked, so I am adding an exception rule for the IPv6 address as well. These are the PF rules in use: - -``` -% cat /etc/pf.conf -. -. -# dns.ian.buetow.org -rdr pass on re0 proto tcp from any to $pub_ip port {53} -> 192.168.0.15 -rdr pass on re0 proto udp from any to $pub_ip port {53} -> 192.168.0.15 -pass in on re0 inet6 proto tcp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state -pass in on re0 inet6 proto udp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state -. -. 
-``` - -## Puppet managed BIND zone files - -In "manifests/dns.pp" (the Puppet manifest for the Master DNS Jail itself) I configured the BIND DNS server this way: - -``` -class { 'bind_freebsd': - config => "puppet:///files/bind/named.${::hostname}.conf", - dynamic_config => "puppet:///files/bind/dynamic.${::hostname}", -} -``` - -The Puppet module is actually a pretty simple one. It installs the file "/usr/local/etc/named/named.conf" and it populates the "/usr/local/etc/named/dynamicdb" directory with all my zone files. - -Once (Puppet-) applied inside of the Jail I get this: - -``` -paul uranus:~/git/blog/source [4268]% ssh admin@dns1.buetow.org.buetow.org pgrep -lf named -60748 /usr/local/sbin/named -u bind -c /usr/local/etc/namedb/named.conf -paul uranus:~/git/blog/source [4269]% ssh admin@dns1.buetow.org.buetow.org tail -n 13 /usr/local/etc/namedb/named.conf -zone "buetow.org" { - type master; - notify yes; - allow-update { key "buetoworgkey"; }; - file "/usr/local/etc/namedb/dynamic/buetow.org"; -}; - -zone "buetow.zone" { - type master; - notify yes; - allow-update { key "buetoworgkey"; }; - file "/usr/local/etc/namedb/dynamic/buetow.zone"; -}; -paul uranus:~/git/blog/source [4277]% ssh admin@dns1.buetow.org.buetow.org cat /usr/local/etc/namedb/dynamic/buetow.org -$TTL 3600 -@ IN SOA dns1.buetow.org. domains.buetow.org. ( - 25 ; Serial - 604800 ; Refresh - 86400 ; Retry - 2419200 ; Expire - 604800 ) ; Negative Cache TTL -; Infrastructure domains -@ IN NS dns1 -@ IN NS dns2 -* 300 IN CNAME web.ian -buetow.org. 86400 IN A 78.46.80.70 -buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:11 -buetow.org. 86400 IN MX 10 mail.ian -dns1 86400 IN A 78.46.80.70 -dns1 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:15 -dns2 86400 IN A 164.177.171.32 -dns2 86400 IN AAAA 2a03:2500:1:6:20:: -. -. -. -. -``` - -That is my master DNS server. My slave DNS server runs in another Jail on another bare metal machine. Everything is set up similar to the master DNS server. 
However, that server is located in a different DC and in different IP subnets. The only difference is the "named.conf": it is configured to be a slave, which means that the "dynamicdb" gets populated by BIND itself while doing zone transfers from the master.
-
-```
-paul uranus:~/git/blog/source [4279]% ssh admin@dns2.buetow.org tail -n 11 /usr/local/etc/namedb/named.conf
-zone "buetow.org" {
-    type slave;
-    masters { 78.46.80.70; };
-    file "/usr/local/etc/namedb/dynamic/buetow.org";
-};
-
-zone "buetow.zone" {
-    type slave;
-    masters { 78.46.80.70; };
-    file "/usr/local/etc/namedb/dynamic/buetow.zone";
-};
-```
-
-## The end result
-
-The end result looks like this now:
-
-```
-% dig -t ns buetow.org
-; <<>> DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 <<>> -t ns buetow.org
-;; global options: +cmd
-;; Got answer:
-;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37883
-;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
-
-;; OPT PSEUDOSECTION:
-; EDNS: version: 0, flags:; udp: 512
-;; QUESTION SECTION:
-;buetow.org. IN NS
-
-;; ANSWER SECTION:
-buetow.org. 600 IN NS dns2.buetow.org.
-buetow.org. 600 IN NS dns1.buetow.org.
-
-;; Query time: 41 msec
-;; SERVER: 192.168.1.254#53(192.168.1.254)
-;; WHEN: Sun May 22 11:34:11 BST 2016
-;; MSG SIZE rcvd: 77
-
-% dig -t any buetow.org @dns1.buetow.org
-; <<>> DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 <<>> -t any buetow.org @dns1.buetow.org
-;; global options: +cmd
-;; Got answer:
-;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49876
-;; flags: qr aa rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 7
-
-;; OPT PSEUDOSECTION:
-; EDNS: version: 0, flags:; udp: 4096
-;; QUESTION SECTION:
-;buetow.org. IN ANY
-
-;; ANSWER SECTION:
-buetow.org. 86400 IN A 78.46.80.70
-buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::11
-buetow.org. 86400 IN MX 10 mail.ian.buetow.org.
-buetow.org. 3600 IN SOA dns1.buetow.org. domains.buetow.org. 25 604800 86400 2419200 604800
-buetow.org. 3600 IN NS dns2.buetow.org.
-buetow.org. 3600 IN NS dns1.buetow.org.
-
-;; ADDITIONAL SECTION:
-mail.ian.buetow.org. 86400 IN A 78.46.80.70
-dns1.buetow.org. 86400 IN A 78.46.80.70
-dns2.buetow.org. 86400 IN A 164.177.171.32
-mail.ian.buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::12
-dns1.buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::15
-dns2.buetow.org. 86400 IN AAAA 2a03:2500:1:6:20::
-
-;; Query time: 42 msec
-;; SERVER: 78.46.80.70#53(78.46.80.70)
-;; WHEN: Sun May 22 11:34:41 BST 2016
-;; MSG SIZE rcvd: 322
-```
-
-## Monitoring
-
-For monitoring, I am using Icinga2 (I operate two Icinga2 instances in two different DCs). I may write another blog article about Icinga2, but to give you the idea, these are the snippets added to my Icinga2 configuration:
-
-```
-apply Service "dig" {
-  import "generic-service"
-
-  check_command = "dig"
-  vars.dig_lookup = "buetow.org"
-  vars.timeout = 30
-
-  assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
-}
-
-apply Service "dig6" {
-  import "generic-service"
-
-  check_command = "dig"
-  vars.dig_lookup = "buetow.org"
-  vars.timeout = 30
-  vars.check_ipv6 = true
-
-  assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
-}
-```
-
-## DNS update workflow
-
-Whenever I have to change a DNS entry, all I have to do is:
-
-* Git clone or update the Puppet repository
-* Update, commit and push the zone file (e.g. "buetow.org")
-* Wait for Puppet: it will deploy the updated zone file and reload the BIND server.
-* The BIND server will notify all slave DNS servers (at the moment only one), which will then transfer the new version of the zone.
-
-That's much more comfortable than manually clicking through the web UIs at Schlund Technologies.
-
-E-Mail me your thoughts at comments@mx.buetow.org!
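One step of the workflow above is easy to forget: the zone's SOA serial (currently 25 in the zone file) must be bumped with every change, or the slave will not transfer the new zone version. A small Go helper sketches how that bump could be automated before committing; this is a hypothetical illustration (the function bumpSerial is made up and not part of my actual Puppet setup):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// bumpSerial increments the number in a "<n> ; Serial" field of a BIND
// zone file, matching the zone layout shown above. Hypothetical helper,
// not part of the actual workflow described in this post.
func bumpSerial(zone string) string {
	re := regexp.MustCompile(`(\d+)(\s*; Serial)`)
	return re.ReplaceAllStringFunc(zone, func(m string) string {
		parts := re.FindStringSubmatch(m)
		n, _ := strconv.Atoi(parts[1])
		return strconv.Itoa(n+1) + parts[2]
	})
}

func main() {
	fmt.Println(bumpSerial("25 ; Serial")) // prints "26 ; Serial"
}
```

Running this over the zone file right before the Git commit would make the "update and push" step a one-liner.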
-
-=> ../ Go back to the main site
diff --git a/content/gemtext/gemfeed/2016-11-20-methods-in-c.gmi b/content/gemtext/gemfeed/2016-11-20-methods-in-c.gmi
deleted file mode 100644
index d2035252..00000000
--- a/content/gemtext/gemfeed/2016-11-20-methods-in-c.gmi
+++ /dev/null
@@ -1,86 +0,0 @@
-# Methods in C
-
-> Written by Paul Buetow 2016-11-20
-
-You can do some sort of object-oriented programming in the C programming language. It is very limited, but also very easy and straightforward to use.
-
-## Example
-
-Let's have a look at the following sample program. Basically, all you have to do is add a function pointer such as "calculate" to the definition of struct "something_s". Later, during the struct initialization, assign a function address to that function pointer:
-
-```
-#include <stdio.h>
-
-typedef struct {
-    double (*calculate)(const double, const double);
-    char *name;
-} something_s;
-
-double multiplication(const double a, const double b) {
-    return a * b;
-}
-
-double division(const double a, const double b) {
-    return a / b;
-}
-
-int main(void) {
-    something_s mult = (something_s) {
-        .calculate = multiplication,
-        .name = "Multiplication"
-    };
-
-    something_s div = (something_s) {
-        .calculate = division,
-        .name = "Division"
-    };
-
-    const double a = 3, b = 2;
-
-    printf("%s(%f, %f) => %f\n", mult.name, a, b, mult.calculate(a,b));
-    printf("%s(%f, %f) => %f\n", div.name, a, b, div.calculate(a,b));
-}
-```
-
-As you can see, you can call the function (pointed to by the function pointer) the same way as in C++ or Java:
-
-```
-printf("%s(%f, %f) => %f\n", mult.name, a, b, mult.calculate(a,b));
-printf("%s(%f, %f) => %f\n", div.name, a, b, div.calculate(a,b));
-```
-
-However, that's just syntactic sugar for:
-
-```
-printf("%s(%f, %f) => %f\n", mult.name, a, b, (*mult.calculate)(a,b));
-printf("%s(%f, %f) => %f\n", div.name, a, b, (*div.calculate)(a,b));
-```
-
-Output:
-
-```
-pbuetow ~/git/blog/source [38268]% gcc methods-in-c.c -o
methods-in-c
-pbuetow ~/git/blog/source [38269]% ./methods-in-c
-Multiplication(3.000000, 2.000000) => 6.000000
-Division(3.000000, 2.000000) => 1.500000
-```
-
-Not complicated at all, but nice to know, and it helps to make the code easier to read!
-
-## The flaw
-
-That's not really how it works in object-oriented languages such as Java and C++. The method call in this example is not a real method call, as "mult" and "div" are not "message receivers". What I mean by that is that the functions cannot access the state of the "mult" and "div" struct objects. In C, if you wanted to access the state of "mult" from within the calculate function, you would have to pass it as an argument:
-
-```
-mult.calculate(mult, a, b);
-```
-
-How to overcome this? You need to take it further...
-
-## Taking it further
-
-If you want to take it further, type "Object-Oriented Programming with ANSI-C" into your favorite internet search engine and you will find some crazy stuff. Some go as far as writing a C preprocessor in AWK, which takes some object-oriented pseudo-C and transforms it to plain C so that the C compiler can compile it to machine code. This is actually similar to how the C++ language had its origins.
-
-E-Mail me your thoughts at comments@mx.buetow.org!
-
-=> ../ Go back to the main site
diff --git a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi b/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi
deleted file mode 100644
index 53bb4575..00000000
--- a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi
+++ /dev/null
@@ -1,191 +0,0 @@
-# Realistic load testing with I/O Riot for Linux
-
-```
-   .---.
-  /     \
-  \.@-@./
-  /`\_/`\
- //  _  \\
-| \     )|_
-/`\_`>  <_/ \
-jgs\__/'---'\__/
-```
-
-> Written by Paul Buetow 2018-06-01, last updated 2021-05-08
-
-## Foreword
-
-This text was first published in the German IT-Administrator computer magazine. Three years have passed since then, and I decided to publish it on my blog too.
-
-=> https://www.admin-magazin.de/Das-Heft/2018/06/Realistische-Lasttests-mit-I-O-Riot
-
-I haven't worked on I/O Riot for some time now, but everything written here is still valid. I still use I/O Riot to debug I/O issues and patterns once in a while, so the tool is by no means obsolete yet. It even helped to resolve a major production incident at work caused by disk I/O.
-
-I am eagerly looking forward to revamping I/O Riot so that it uses the new BPF Linux capabilities instead of plain old Systemtap (alternatively, newer versions of Systemtap can also use BPF as the backend, I have learned). Also, when I initially wrote I/O Riot, I didn't have any experience with the Go programming language yet and therefore wrote it in C. Once it gets revamped, I might consider using Go instead of C, as that would spare me many segmentation faults and headaches during development ;-). I might also just stick with C for plain performance reasons and only refactor the code dealing with concurrency.
-
-Please note that some of the screenshots show the command "ioreplay" instead of "ioriot". That's because the name changed after those were taken.
-
-# The article
-
-With I/O Riot, IT administrators can load test and optimize the I/O subsystem of Linux-based operating systems. The tool makes it possible to record I/O patterns and replay them at a later time as often as desired. This means bottlenecks can be reproduced and eradicated.
-
-When storing huge amounts of data, such as more than 200 billion archived emails at Mimecast, it's not only the available storage capacity that matters, but also the data throughput and latency.
At the same time, operating costs must be kept as low as possible. The more systems involved, the more important it is to optimize the hardware, the operating system and the applications running on it.
-
-## Background: Existing Techniques
-
-Conventional I/O benchmarking: Administrators usually use open source benchmarking tools like IOZone and bonnie++. Available database systems such as Redis and MySQL come with their own benchmarking tools. The common problem with these tools is that they work with prescribed artificial I/O patterns. Although this can test both sequential and randomized data access, the patterns do not correspond to what can be found on production systems.
-
-Testing by load test environment: Another option is to use a separate load test environment in which, as far as possible, a production environment with all its dependencies is simulated. However, an environment consisting of many microservices is very complex. Microservices are usually managed by different teams, which means extra coordination effort for each load test. Another challenge is to generate the load as authentically as possible so that the patterns correspond to a production environment. Such a load test environment can only handle as many requests as its weakest link can handle. For example, load generators send many read and write requests to a frontend microservice, whereby the frontend forwards the requests to a backend microservice responsible for storing the data. If the frontend service does not process the requests efficiently enough, the backend service is not well utilized in the first place. As a rule, all microservices are clustered across many servers, which makes everything even more complicated. Under all these conditions it is very difficult to test the I/O of separate backend systems. Moreover, for many small and medium-sized companies, a separate load test environment would not be feasible for cost reasons.
-
-Testing in the production environment: For these reasons, benchmarks are often carried out in the production environment. To derive value from this, such tests are performed especially during peak hours, when systems are under high load. However, testing on production systems is associated with risks and, without adequate protection, can lead to failures or loss of data.
-
-## Benchmarking the Email Cloud at Mimecast
-
-For email archiving, Mimecast uses an internally developed microservice, which is operated directly on Linux-based storage systems. A storage cluster is divided into several replication volumes. Data is always replicated three times across two secure data centers. Customer data is automatically allocated to one or more volumes, depending on throughput, so that all volumes are automatically assigned the same load. Customer data is archived on conventional but inexpensive hard disks with several terabytes of storage capacity each. I/O benchmarking proved difficult for all the reasons mentioned above. Furthermore, in the case of self-developed software, there are no ready-made tools for this purpose. The service operates on many block devices simultaneously, which can make the RAID controller a bottleneck. None of the freely available benchmarking tools can test several block devices at the same time without extra effort. In addition, emails typically consist of many small files, and randomized access to many small files is particularly inefficient. In addition to many software adaptations, the hardware and operating system must also be optimized.
-
-Mimecast encourages employees to be innovative and pursue their own ideas in the form of an internal competition, Pet Project. The goal of the pet project I/O Riot was to simplify OS- and hardware-level I/O benchmarking. The first prototype of I/O Riot was awarded an internal roadmap prize in the spring of 2017.
A few months later, I/O Riot was used to reduce write latency in the storage clusters by about 50%. The improvement was first verified by I/O replay on a test system and then successively applied to all storage systems. I/O Riot was also used to resolve a production incident caused by disk I/O load.
-
-## Using I/O Riot
-
-First, all I/O events are logged to a file on a production system with I/O Riot. The file is then copied to a test system, where all events are replayed in the same way. The crucial point here is that you can reproduce I/O patterns as they are found on a production system as often as you like on a test system. This opens up the possibility of tuning the system's knobs after each run.
-
-### Installation
-
-I/O Riot was tested under CentOS 7.2 x86_64. For compiling, the GNU C compiler and Systemtap, including kernel debug information, are required. Other Linux distributions are theoretically compatible but untested. First of all, you should update the systems involved as follows:
-
-```
-% sudo yum update
-```
-
-If the kernel is updated, please restart the system. The installation could be done without a restart, but that would complicate it: the installed kernel version should always correspond to the currently running kernel. You can then install I/O Riot as follows:
-
-```
-% sudo yum install gcc git systemtap yum-utils kernel-devel-$(uname -r)
-% sudo debuginfo-install kernel-$(uname -r)
-% git clone https://github.com/mimecast/ioriot
-% cd ioriot
-% make
-% sudo make install
-% export PATH=$PATH:/opt/ioriot/bin
-```
-
-Note: It is not best practice to install any compilers on production systems. For further information, please have a look at the enclosed README.md.
-
-### Recording of I/O events
-
-All I/O events are kernel related. If a process wants to perform an I/O operation, such as opening a file, it must inform the kernel of this by a system call (syscall for short).
I/O Riot relies on the Systemtap tool to record I/O syscalls. Systemtap, available for all popular Linux distributions, lets you take a look at the running kernel in production environments, which makes it ideally suited to monitor all I/O-relevant Linux syscalls and log them to a file. Other tools, such as strace, are not an alternative because they slow down the system too much.
-
-During recording, ioriot acts as a wrapper and executes all relevant Systemtap commands for you. Use the following command to log all events to io.capture:
-
-```
-% sudo ioriot -c io.capture
-```
-
-=> ./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png Screenshot I/O recording
-
-A Ctrl-C (SIGINT) stops recording prematurely; otherwise, ioriot terminates itself automatically after 1 hour. Depending on the system load, the output file can grow to several gigabytes. Only metadata is logged, not the read and written data itself. When replaying later, only random data is used. Under certain circumstances, Systemtap may omit some system calls and issue warnings. This is to ensure that Systemtap does not consume too many resources.
-
-### Test preparation
-
-Then copy io.capture to a test system. The log also contains all accesses to the pseudo file systems devfs, sysfs and procfs. Replaying these makes little sense, which is why you must first generate a cleaned-up, replayable version io.replay from io.capture as follows:
-
-```
-% sudo ioriot -c io.capture -r io.replay -u $USER -n TESTNAME
-```
-
-The parameter -n allows you to assign a freely selectable test name. The system user under which the test is to be replayed is specified via the parameter -u.
-
-### Test Initialization
-
-The test will most likely want to access existing files. These are files the test wants to read but does not create by itself. Their existence must be ensured before the test.
You can do this as follows:
-
-```
-% sudo ioriot -i io.replay
-```
-
-To avoid any damage to the running system, ioriot only works in special directories. The tool creates a separate subdirectory for each file system mount point (e.g. /, /usr/local, /store/00,...) (here: /.ioriot/TESTNAME, /usr/local/.ioriot/TESTNAME, /store/00/.ioriot/TESTNAME,...). By default, the working directory of ioriot is /usr/local/ioriot/TESTNAME.
-
-=> ./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png Screenshot test preparation
-
-You must re-initialize the environment before each run. Data from previous tests will automatically be moved to a trash directory, which can be deleted for good with "sudo ioriot -P".
-
-### Replay
-
-After initialization, you can replay the log with -r. You can use -R to initiate both test initialization and replay in a single command, and -S can be used to specify a file to which statistics are written after the test run.
-
-You can also influence the playback speed: "-s 0" is interpreted as "play back as fast as possible" and is the default setting. With "-s 1" all operations are performed at the original speed. "-s 2" would double the playback speed and "-s 0.5" would halve it.
-
-=> ./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure3-ioriot-replay.png Screenshot replaying I/O
-
-As an initial test, for example, you could compare the two Linux I/O schedulers CFQ and Deadline and check with which scheduler the test runs the fastest. You run the test separately for each scheduler. The following shell loop iterates through all attached block devices of the system and changes their I/O scheduler to the one specified in the variable $new_scheduler (in this case either cfq or deadline). Subsequently, all I/O events from the io.replay log are played back.
At the end, an output file with statistics is generated:
-
-```
-% new_scheduler=cfq
-% for scheduler in /sys/block/*/queue/scheduler; do
-    echo $new_scheduler | sudo tee $scheduler
-done
-% sudo ioriot -R io.replay -S cfq.txt
-% new_scheduler=deadline
-% for scheduler in /sys/block/*/queue/scheduler; do
-    echo $new_scheduler | sudo tee $scheduler
-done
-% sudo ioriot -R io.replay -S deadline.txt
-```
-
-According to the results, the test ran 940 seconds faster with the Deadline scheduler:
-
-```
-% cat cfq.txt
-Num workers: 4
-Threads per worker: 128
-Total threads: 512
-Highest loadavg: 259.29
-Performed ioops: 218624596
-Average ioops/s: 101544.17
-Time ahead: 1452s
-Total time: 2153.00s
-% cat deadline.txt
-Num workers: 4
-Threads per worker: 128
-Total threads: 512
-Highest loadavg: 342.45
-Performed ioops: 218624596
-Average ioops/s: 180234.62
-Time ahead: 2392s
-Total time: 1213.00s
-```
-
-In any case, you should also set up a time series database, such as Graphite, where the I/O throughput can be plotted. Figures 4 and 5 show the read and write access times of both tests. The dip in the graphs makes it clear when the CFQ test ended and the Deadline test started. The read latency of both tests is similar; the write latency is dramatically improved using the Deadline scheduler.
-
-=> ./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png Graphite visualization of the mean read access times in ms with CFQ and Deadline Scheduler.
-
-=> ./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png Graphite visualization of the average write access times in ms with CFQ and Deadline Scheduler.
-
-You should also take a look at the iostat tool. The iostat screenshot shows the output of iostat -x 10 during a test run. As you can see, one block device is fully loaded at 99% utilization, while all other block devices still have sufficient headroom.
This could be an indication of poor data distribution in the storage system and is worth pursuing. It is not uncommon for I/O Riot to reveal software problems.
-
-=> ./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure6-iostat.png Output of iostat. The block device sdy seems to be almost fully utilized at 99%.
-
-## I/O Riot is Open Source
-
-The tool has already proven to be very useful and will continue to be actively developed as time and priority permit. Mimecast intends to be an ongoing contributor to Open Source. You can find I/O Riot at:
-
-=> https://github.com/mimecast/ioriot
-
-## Systemtap
-
-Systemtap is a tool for the instrumentation of the Linux kernel. It provides an AWK-like programming language. Programs written in it are compiled by Systemtap to C and then into a dynamically loadable kernel module. Loaded into the kernel, the program has access to Linux internals. A Systemtap program written for I/O Riot monitors which I/O syscalls take place, at which time, with which parameters, from which process, and with which return values.
-
-For example, the open syscall opens a file and returns the responsible file descriptor. The read and write syscalls operate on a file descriptor and return the number of read or written bytes. The close syscall closes a given file descriptor. I/O Riot comes with a ready-made Systemtap program, which is already compiled into a kernel module and installed to /opt/ioriot. In addition to open, read and close, it logs many other I/O-relevant calls.
-
-=> https://sourceware.org/systemtap/
-
-## More references
-
-=> http://www.iozone.org/ IOZone
-=> https://www.coker.com.au/bonnie++/ Bonnie++
-=> https://graphiteapp.org Graphite
-=> https://en.wikipedia.org/wiki/Memory-mapped_I/O Memory mapped I/O
-
-E-Mail me your thoughts at comments@mx.buetow.org!
-
-=> ../ Go back to the main site
diff --git a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png b/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png
Binary files differ
deleted file mode 100644
index 43ac852f..00000000
--- a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png
+++ /dev/null
diff --git a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png b/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png
Binary files differ
deleted file mode 100644
index 709d7490..00000000
--- a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png
+++ /dev/null
diff --git a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure3-ioriot-replay.png b/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure3-ioriot-replay.png
Binary files differ
deleted file mode 100644
index 3bd66b6f..00000000
--- a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure3-ioriot-replay.png
+++ /dev/null
diff --git a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png b/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png
Binary files differ
deleted file mode 100644
index 160b2305..00000000
--- a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png
+++ /dev/null
diff --git a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png
b/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png
Binary files differ
deleted file mode 100644
index e30efdbb..00000000
--- a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png
+++ /dev/null
diff --git a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure6-iostat.png b/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure6-iostat.png
Binary files differ
deleted file mode 100644
index 0d3fc0d8..00000000
--- a/content/gemtext/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure6-iostat.png
+++ /dev/null
diff --git a/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi b/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi
deleted file mode 100644
index c749705a..00000000
--- a/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi
+++ /dev/null
@@ -1,108 +0,0 @@
-# DTail - The distributed log tail program
-
-> Written by Paul Buetow 2021-04-22, last updated 2021-04-26
-
-=> ./2021-04-22-dtail-the-distributed-log-tail-program/title.png DTail logo image
-
-This article first appeared on the Mimecast Engineering Blog, but I made it available here in my personal Gemini capsule too.
-
-=> https://medium.com/mimecast-engineering/dtail-the-distributed-log-tail-program-79b8087904bb Original Mimecast Engineering Blog post at Medium
-
-Running a large cloud-based service requires monitoring the state of huge numbers of machines, a task for which many standard UNIX tools were not really designed. In this post, I will describe a simple program, DTail, that Mimecast has built and released as Open-Source, which enables us to monitor the log files of many servers at once without the costly overhead of a full-blown log management system.
-
-At Mimecast, we run over 10 thousand server boxes.
Most of them host multiple microservices, and each of them produces log files. Even with the use of time series databases and monitoring systems, raw application logs are still an important source of information when it comes to analysing, debugging, and troubleshooting services.
-
-Every engineer familiar with UNIX or a UNIX-like platform (e.g., Linux) is well aware of tail, a command-line program for displaying a text file's contents on the terminal, which is also especially useful for following application or system log files with tail -f logfile.
-
-Think of DTail as a distributed version of the tail program, which is very useful when you have a distributed application running on many servers. DTail is an Open-Source, cross-platform log file analysis & statistics gathering tool that is fairly easy to use, support and maintain, designed for Engineers and Systems Administrators. It is programmed in Google Go.
-
-## A Mimecast Pet Project
-
-DTail got its inspiration from public domain tools already available in this area, but it is a blue-sky, from-scratch development which was first presented at Mimecast's annual internal Pet Project competition (where it was awarded a Bronze prize). It has gained popularity since and is one of the most widely deployed DevOps tools at Mimecast (reaching nearly 10k server installations), and many engineers use it on a regular basis. The Open-Source version of DTail is available at:
-
-=> https://dtail.dev
-
-Try it out! We would love any feedback. But first, read on...
-
-## Differentiating from log management systems
-
-Why not just use a full-blown log management system? There are various Open-Source and commercial log management solutions on the market you could choose from (e.g. the ELK stack). Most of them store the logs in a centralized location and are fairly complex to set up and operate.
They can also be pretty expensive to operate if you have to buy dedicated hardware (or pay fees to your cloud provider) and hire support staff for it.
-
-DTail does not aim to replace any of the log management tools already available, but is rather an additional tool crafted especially for ad-hoc debugging and troubleshooting purposes. DTail is cheap to operate, as it does not require any dedicated hardware for log storage: it operates directly on the source of the logs. This means that a DTail server is installed on all server boxes producing logs. This decentralized approach comes with the direct advantage that there is no added delay, because the logs are not shipped to a central log storage device. The reduced complexity also makes it more robust against outages. You won't be able to troubleshoot your distributed application very well if the log management infrastructure isn't working either.
-
-=> ./2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif DTail sample session animated gif
-
-As a downside, you won't be able to access any logs with DTail when the server is down. Furthermore, a server can store logs only up to a certain capacity, as disks will fill up. For the purpose of ad-hoc debugging, these are typically not issues. Usually, it's the application you want to debug and not the server. And disk space is rarely an issue for bare metal and VM-based systems these days, with sufficient space for several weeks' worth of log storage being available. DTail also supports reading compressed logs. The currently supported compression algorithms are gzip and zstd.
-
-## Combining simplicity, security and efficiency
-
-DTail also has a client component that connects to multiple servers concurrently to follow log files (or any other text files).
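The fan-out pattern behind such a client can be sketched in a few lines of Go. This is only an illustration with made-up server names and a canned query function, not DTail's actual implementation:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// query stands in for one server connection; the real DTail client speaks
// its SSH-based protocol instead of returning a canned line.
func query(server string) string {
	return "log line from " + server
}

// fanOut queries all servers concurrently, one goroutine per server, and
// multiplexes the results onto a single channel, just as the DTail client
// multiplexes many concurrent server connections.
func fanOut(servers []string) []string {
	lines := make(chan string)
	var wg sync.WaitGroup
	for _, s := range servers {
		wg.Add(1)
		go func(s string) {
			defer wg.Done()
			lines <- query(s)
		}(s)
	}
	go func() { wg.Wait(); close(lines) }()

	var out []string
	for line := range lines {
		out = append(out, line)
	}
	sort.Strings(out) // arrival order is nondeterministic
	return out
}

func main() {
	for _, l := range fanOut([]string{"serv-01", "serv-02"}) {
		fmt.Println(l)
	}
}
```

Because goroutines are cheap, the same structure scales from a handful of servers to thousands, which is why a single laptop can drive very large fan-outs.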
-
-The DTail client interacts with a DTail server on port TCP/2222 via the SSH protocol and does not interact in any way with the system's SSH server (e.g., OpenSSH server), which might already be running on port TCP/22. As a matter of fact, you don't need a regular SSH server running for DTail at all. There is no support for interactive login shells on TCP/2222 either, as by design that port can only be used for text data streaming. The SSH protocol is used for the public/private key infrastructure and transport encryption only, and DTail implements its own protocol on top of SSH for the features provided. There is no need to set up or buy any additional TLS certificates. Port 2222 can easily be reconfigured if you prefer to use a different one.
-
-The DTail server, which is a single static binary, will not fork an external process. This means that all features are implemented in native Go code (exception: Linux ACL support is implemented in C, but it must be enabled explicitly at compile time), which helps to make it robust, secure, efficient, and easy to deploy. A single client, running on a standard laptop, can connect to thousands of servers concurrently while still maintaining a small resource footprint.
-
-Recent log files are very likely still in the file system caches on the servers, so there tends to be minimal I/O overhead involved.
-
-## The DTail family of commands
-
-Following the UNIX philosophy, DTail includes multiple command-line programs, each for a different purpose:
-
-* dserver: The DTail server, the only binary required to be installed on the servers involved.
-* dtail: The distributed log tail client for following log files.
-* dcat: The distributed cat client for concatenating and displaying text files.
-* dgrep: The distributed grep client for searching text files for a regular expression pattern.
-* dmap: The distributed map-reduce client for aggregating stats from log files.
- -=> ./2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif DGrep sample session animated gif - -## Usage example - -The use of these commands is almost self-explanatory for a person already used to the standard command line on Unix systems. One of the main goals is to make DTail easy to use. A tool that is too complicated to use in high-pressure scenarios (e.g., during an incident) can be quite detrimental. - -The basic idea is to start one of the clients from the command line and provide a list of servers to connect to with --servers. You also must provide a path of remote (log) files via --files. If you want to process multiple files per server, you can either provide a comma-separated list of file paths or make use of file system globbing (or a combination of both). - -The following example would connect to all DTail servers listed in serverlist.txt, follow all files with the ending .log and filter for lines containing the string error. You can specify any Go-compatible regular expression. In this example we add the case-insensitive flag to the regex: - -``` -dtail --servers serverlist.txt --files '/var/log/*.log' --regex '(?i:error)' -``` - -You usually want to specify a regular expression as a client argument. This means that responses are pre-filtered on the server-side, so that only the relevant lines are sent back to the client. If your logs are growing very rapidly and the regex is not specific enough, there is a chance that your client is not fast enough to keep up with processing all of the responses. This could be due to a network bottleneck or simply a slow terminal emulator displaying the log lines on the client-side. - -A green 100 in the client output before each log line received from the server always indicates that there were no such problems and 100% of all log lines could be displayed on your terminal (have a look at the animated Gifs in this post).
If the percentage falls below 100 it means that some of the channels used by the servers to send data to the client are congested and lines were dropped. In this case, the color will change from green to red. The user could then decide to run the same query but with a more specific regex. - -You could also provide a comma-separated list of servers as opposed to a text file. There are many more options you could use; the ones listed here are just the very basic ones. There are more instructions and usage examples on the GitHub page. Also, you can study even more of the available options via the --help switch (some real treasures might be hidden there). - -## Fitting it in - -DTail integrates nicely into the user management of existing infrastructure. It follows normal system permissions and does not open new “holes” on the server, which helps to keep security departments happy. The user would not have more or fewer file read permissions than they would have via a regular SSH login shell. There is full support for SSH keys, traditional UNIX permissions, and Linux ACLs. There is also a very low resource footprint involved. On average, tailing and searching log files requires less than 100MB of RAM and less than a quarter of a CPU core per participating server. Complex map-reduce queries on big data sets will require more resources accordingly. - -## Advanced features - -The features listed here are out of scope for this blog post but are worth mentioning: - -* Distributed map-reduce queries on stats provided in log files with dmap. dmap comes with its own SQL-like aggregation query language. -* Stats streaming with continuous map-reduce queries. The difference from normal queries is that the stats are aggregated over a specified interval only on the newly written log lines, giving a de facto live stat view for each interval. -* Server-side scheduled queries on log files.
The queries are configured in the DTail server configuration file and scheduled at certain time intervals. Results are written to CSV files. This is useful for generating daily stats from the log files without the need for an interactive client. -* Server-side stats streaming with continuous map-reduce queries. This can, for example, be used to periodically generate stats from the logs at a configured interval, e.g., log error counts by the minute. These can then be sent to a time-series database (e.g., Graphite) and plotted in a Grafana dashboard. -* Support for custom extensions. E.g., for different server discovery methods (so you don’t have to rely on plain server lists) and log file formats (so that map-reduce queries can parse more stats from the logs). - -## For the future - -There are various features we want to see in the future. - -* A spartan mode, printing nothing but the raw remote log lines, would be a nice feature to have. It would make it easier to post-process the data produced by the DTail client with common UNIX tools. (To some degree this is possible already: just disable the ANSI terminal color output of the client with -noColors and pipe the output to another program.) -* It would be tempting to implement a dgoawk command, a distributed version of the AWK programming language implemented purely in Go, for advanced text data stream processing capabilities. There are 3rd-party libraries available implementing AWK in pure Go which could be used. -* A more complex change would be the support of federated queries. You can connect to thousands of servers from a single client running on a laptop. But does it scale to 100k servers? Some of the servers could be used as middleware for connecting to even more servers. -* Another aspect is to extend the documentation. Especially the advanced features, such as the map-reduce query language and how to configure the server-side queries, currently require more documentation.
For now, you can read the code, sample config files or just ask the author for that! But this will be certainly addressed in the future. - -## Open Source - -Mimecast highly encourages you to have a look at DTail and submit an issue for any features you would like to see. Have you found a bug? Maybe you just have a question or comment? If you want to go a step further: We would also love to see pull requests for any features or improvements. Either way, if in doubt just contact us via the DTail GitHub page. - -=> https://dtail.dev - -E-Mail me your thoughts at comments@mx.buetow.org! - -=> ../ Go back to the main site diff --git a/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif b/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif Binary files differdeleted file mode 100644 index e2f2ac64..00000000 --- a/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif +++ /dev/null diff --git a/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif b/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif Binary files differdeleted file mode 100644 index 8f6b56bf..00000000 --- a/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif +++ /dev/null diff --git a/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/title.png b/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/title.png Binary files differdeleted file mode 100644 index 4e343c4f..00000000 --- a/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/title.png +++ /dev/null diff --git a/content/gemtext/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi b/content/gemtext/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi deleted file mode 100644 index 9902f40b..00000000 --- a/content/gemtext/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi +++ /dev/null @@ -1,76 +0,0 @@ -# 
Welcome to the Geminispace - -> Written by Paul Buetow 2021-04-24, last updated 2021-04-30, ASCII Art by Andy Hood - -Have you reached this article already via Gemini? You need a special client for that; web browsers such as Firefox, Chrome, Safari, etc. don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule, as people say in Geminispace) is: - -=> gemini://buetow.org - -If, however, you are still using HTTP, then you are just surfing the fallback HTML version of this capsule. In that case I suggest reading on to learn what this is all about :-). - -``` - - /\ - / \ - | | - |NASA| - | | - | | - | | - ' ` - |Gemini| - | | - |______| - '-`'-` . - / . \'\ . .' - ''( .'\.' ' .;' -'.;.;' ;'.;' ..;;' AsH - -``` - -## Motivation - -### My urge to revamp my personal website - -For some time I have had the urge to revamp my personal website. Not to update the technology and the design of it, but to update all the content (+ keep it current) and also to start a small tech blog again. So unconsciously I started to search for a good platform and/or software to do all of that in a KISS (keep it simple & stupid) way. - -### My still great Laptop running hot - -Earlier this year (2021) I noticed that my almost 7-year-old but still great Laptop started to become hot and slowed down while surfing the web. Also, the Laptop's fan became quite noisy. This was all due to the additional bloat on the website: JavaScript, excessive use of CSS, tracking cookies+pixels, ads and so on. - -All I wanted was to read an interesting article, but after a big advertising pop-up banner appeared and made everything worse I gave up and closed the browser tab. - -## Discovering the Gemini internet protocol - -Around the same time I discovered a relatively new, more lightweight protocol named Gemini, which supports none of these CPU-intensive features: no HTML, no JavaScript and no CSS. Tracking and ads are not supported by the Gemini protocol either.
- -The "downside" is that due to the limited capabilities of the Gemini protocol all sites look very old and spartan. But that is not really a downside; it is in fact a design choice people made. It is up to the client software how your capsule looks. For example, you could use a graphical client with nice font renderings and colors to improve the appearance. Or you could just use a very minimalistic, black-and-white command-line Gemini client. It's your (the user's) choice. - -=> ./2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png Screenshot Amfora Gemini terminal client surfing this site - -Why is there a need for a new protocol? As the modern web is a superset of Gemini, can't we just use simple HTML 1.0? That's a good and valid question. It is not a technical problem but a human problem. We tend to abuse features once they are available. You can be sure that things stay simple and efficient as long as you are using the Gemini protocol. On the other hand, you can't force every website in the modern web to only serve plain and simple-looking HTML pages. - -## My own Gemini capsule - -As it is very easy to set up and maintain your own Gemini capsule (a Gemini server + content composed in the Gemtext markup language), I decided to create my own. What I really like about Gemini is that I can just use my favorite text editor and get typing. I don't need to worry about the style and design of the site and I also don't have to test anything in ten different web browsers. I can focus solely on the content! As a matter of fact, I am using the Vim editor + its spellchecker + auto word completion functionality to write this.
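For a taste of how simple Gemtext is, these are essentially all the line types the format knows (preformatted blocks are toggled with three backticks and are left out here so as not to break this sample):

```
# A heading
## A sub-heading
A plain text paragraph line.
=> gemini://example.org/ A link line with an optional description
* A list item
> A quoted line
```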
- -## Advantages summarised - -* Supports an alternative to the modern bloated web -* Easy to operate and easy to write content -* No need to worry about various web browser compatibilities -* It's the client's responsibility how the content is designed+presented -* Lightweight (although not as lightweight as the Gopher protocol) -* Supports privacy (no cookies, no request header fingerprinting, TLS encryption) -* Fun to play with (it's a bit geeky yes, but a lot of fun!) - -## Dive into deep Gemini space - -Check out one of the following links for more information about Gemini. For example, you will find a FAQ which explains why the protocol is named "Gemini". Many Gemini capsules are dual hosted via Gemini and HTTP(S), so that people new to Gemini can sneak peek the content with a normal web browser. As a matter of fact, some people go as far as tri-hosting all their content via HTTP(S), Gemini and Gopher. - -=> gemini://gemini.circumlunar.space -=> https://gemini.circumlunar.space - -E-Mail me your thoughts at comments@mx.buetow.org! - -=> ../ Go back to the main site diff --git a/content/gemtext/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png b/content/gemtext/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png Binary files differdeleted file mode 100644 index 093aec79..00000000 --- a/content/gemtext/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png +++ /dev/null diff --git a/content/gemtext/gemfeed/2021-05-15-buetow.org.sh-one-bash-script-to-rule-it-all.draft.gmi b/content/gemtext/gemfeed/2021-05-15-buetow.org.sh-one-bash-script-to-rule-it-all.draft.gmi deleted file mode 100644 index 858478aa..00000000 --- a/content/gemtext/gemfeed/2021-05-15-buetow.org.sh-one-bash-script-to-rule-it-all.draft.gmi +++ /dev/null @@ -1,183 +0,0 @@ -# buetow.org.sh - One Bash script to rule it all - -> TODO: ADD WRITTEN BY AND CREATED AT BLABLA - -You might have read my previous blog post about entering the Geminispace. 
- -=> ./2021-04-24-welcome-to-the-geminispace Welcome to the Geminispace - -## Motivation - -Another benefit of using Gemini is that the Gemtext markup language is very easy to parse. As my site is dual hosted (Gemini+HTTP) I could in theory just write a shell script to deal with the conversion from Gemtext to HTML and not to rely on any external tools here. - -So I did exactly that, I wrote a Bash script which does the following: - -- Converts all Gemtext (*.gmi) files to HTML files -- Generates a Gemtext atom.xml feed for my blog posts -- Generates a HTML atom.xml feed of my blog posts - -I could have done all of that with a more powerful language than Bash (such as Perl, Ruby, Go...), but I didn't. The purpose of this exercise was to challenge what I can do with a "simple" Bash script and also to learn new things. - -``` - o .,<>., o - |\/\/\/\/| - '========' - (_ SSSSSSs - )a'`SSSSSs - /_ SSSSSS - .=## SSSSS - .#### SSSSs - ###::::SSSSS - .;:::""""SSS - .:;:' . . \\ - .::/ ' .'| - .::( . | - :::) \ - /\( / - /) ( | - .' \ . ./ / - _-' |\ . | - _..--.. . /"---\ | ` | . | - -=====================,' _ \=(*#(7.#####() | `/_.. , ( - _.-''``';'-''-) ,. \ ' '+/// | .'/ \ ``-.) \ - ,' _.- (( `-' `._\ `` \_/_.' ) /`-._ ) | - ,'\ ,' _.'.`:-. \.-' / <_L )" | - _/ `._,' ,')`; `-'`' | L / / - / `. ,' ,|_/ / \ ( <_-' \ - \ / `./ ' / /,' \ /|` `. | - )\ /`._ ,'`._.-\ |) \' - / `.' )-'.-,' )__) |\ `| - : /`. `.._(--.`':`':/ \ ) \ \ - |::::\ ,'/::;-)) / ( )`. | - ||::::: . .::': :`-( |/ . | - ||::::| . :| |==[]=: . - \ - |||:::| : || : | | /\ ` | - ___ ___ '|;:::| | |' \=[]=| / \ \ -| /_ ||``|||::::: | ; | | | \_.'\_ `-. -: \_``[]--[]|::::'\_;' )-'..`._ .-'\``:: ` . \ - \___.>`''-.||:.__,' SSt |_______`> <_____:::. . . \ _/ - `+a:f:......jrei''' -``` - -## W3C validator says all good -# -All generated HTML and Atom files pass the W3C validation. 
It is crazy that generating the Atom feed with a valid XHTML content body for each blog post was the most difficult part to implement in Bash. The complexity of these formats is the reason why I decided to use Gemini as the primary format in the first place. Ironically, I then spent a couple of hours getting the XHTML and web Atom feed working. To be fair, the Atom feed also works with Gemini. - -## Meta files for atom feed generation - -## Not without sed and grep and cut - -Soon I realised that I didn't want to go without a bit of grep and sed and cut. Regular expression matching and simple string substitution tasks can be done in pure Bash, but in my own opinion grep+sed are more powerful and easier to use (as I am used to these anyway). I managed not to use any AWK though. - -### Grepping - -I could use Bash's built-in regular expression matching engine here, but I am used to the grep pattern syntax, which is why I decided to do it this way: -``` -if grep -E -q "$IMAGE_PATTERN" <<< "$link"; then - html::img "$link" "$descr" - return -fi -``` - -### Sed-ing - -Sed comes in very handy for things like fixing HTML block text by replacing the less-than "<" and greater-than ">" symbols with their corresponding HTML entities with one single command: - -``` -echo "$line" | sed 's|<|\&lt;|g; s|>|\&gt;|g' -``` - -Sed is also useful in the following example, where the script checks whether the newly generated Atom feed file has changed compared to the previous version or not: - -``` -if ! diff -u <(sed 3d "$atom_file.tmp") <(sed 3d "$atom_file"); then - ... -else - ... -fi -``` - -### Cut-ing - -## Bash Modules for better structure - -I separated the script into different sections; you could call them modules. For example, all functions dealing with the Atom feed are prefixed with atomfeed::, all functions dealing with HTML are prefixed with html:: and so on.
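The Cut-ing section above is still empty in this draft. One plausible example (my own sketch, not taken from the actual buetow.org.sh script) is splitting a Gemtext link line into its URL and description parts:

```shell
# A Gemtext link line has the form "=> URL optional description".
link_line='=> https://dtail.dev The DTail homepage'

# Field 2 is the URL; everything from field 3 onwards is the description.
url=$(echo "$link_line" | cut -d' ' -f2)
descr=$(echo "$link_line" | cut -d' ' -f3-)

echo "$url"    # https://dtail.dev
echo "$descr"  # The DTail homepage
```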
- -As of writing this the script has the following modules and module functions: - -``` -❯ grep '::.* ()' buetow.org.sh -assert::equals () { -atomfeed::meta () { -atomfeed::generate () { -html::paragraph () { -html::heading () { -html::quote () { -html::img () { -html::link () { -html::gemini2html () { -html::generate () { -html::test () { -main::help () { -``` - -## Declaring all variables - -Many Bash scripts out in the wild don't have their variables declared, which leads to bad surprises, as the default behaviour is that an undeclared variable automatically becomes a global variable once in use. So the best practice is to always declare a variable with one of the keywords "declare", "readonly" or "local". - -Whole numbers can also have the option "-i", e.g. "declare -i num=52", and read-only variables can be declared via "readonly", "declare -r" or "local -r". Function local variables can also be declared with the "local" keyword. - -This is an example from the Atom module, where all variables are local to the function. I also make use of the "assign-then-shift" pattern, which goes like this: "local -r var1=$1; shift; local -r var2=$1; shift". The idea is that you only use "$1" to assign function arguments to named (better readable) local function variables. You will never have to bother about "$2" or above. That is very useful when you constantly refactor your code and remove or add function arguments. It's something I picked up from a colleague (a pure Bash wizard) some time ago: - -``` -atomfeed::meta () { - local -r now="$1"; shift - local -r gmi_file_path="$1"; shift - ... -} -``` - -## Unit tests - -Especially the Gemtext to HTML conversion part is an excellent use case for unit testing. There are unit tests for various Gemtext to HTML conversions (e.g. a header, paragraph, link, quote, ...). My small unit test framework only consists of the assert::equals() function.
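A plausible sketch of what such an assertion function can look like (reconstructed for illustration; the real implementation in the script may differ):

```shell
# Compare an expected against an actual value and report the result.
assert::equals () {
    local -r expected="$1"; shift
    local -r got="$1"; shift

    if [[ "$got" == "$expected" ]]; then
        echo "OK: $got"
    else
        echo "FAIL: expected '$expected' but got '$got'" >&2
        return 1
    fi
}

assert::equals foo "$(echo foo)"  # OK: foo
```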
- -Forces you to think creatively and to keep features fairly simple (good things) - -## De-facto templates - -## It's a static website generator - -Generate statically on my laptop and commit all statically generated files to git. Can also preview locally. - -A lot of Bash tricks - -## Config file - -## Learnings from ShellCheck - -ShellCheck: Not happy with all recommendations but most, e.g. read -r, quotes, etc. - -### While-read loops - -Specify -r - -### Warnings about variables not quoted - -### if cmd; then - -## The result(s) - -### Gemtext via Gemini protocol - -=> gemini://buetow.org gemini://buetow.org - The original Gemini capsule -=> gemini://buetow.org/gemfeed/ gemini://buetow.org/gemfeed/ - The Gemfeed -=> gemini://buetow.org/gemfeed/atom.xml gemini://buetow.org/gemfeed/atom.xml - The Atom feed - -### XHTML via HTTP protocol - -=> https://buetow.org https://buetow.org - The original Gemini capsule -=> https://buetow.org/gemfeed/ https://buetow.org/gemfeed/ - The Gemfeed -=> https://buetow.org/gemfeed/atom.xml https://buetow.org/gemfeed/atom.xml - The Atom feed - -TODO: ADD GO BACK LINK diff --git a/content/gemtext/gemfeed/2021-05-16-personal-bash-coding-style-guide.gmi b/content/gemtext/gemfeed/2021-05-16-personal-bash-coding-style-guide.gmi deleted file mode 100644 index 8adb5b6b..00000000 --- a/content/gemtext/gemfeed/2021-05-16-personal-bash-coding-style-guide.gmi +++ /dev/null @@ -1,385 +0,0 @@ -# Personal Bash coding style guide - -``` - .---------------------------. - /,--..---..---..---..---..--. `. - //___||___||___||___||___||___\_| - [j__ ######################## [_| - \============================| - .==| |"""||"""||"""||"""| |"""|| -/======"---""---""---""---"=| =|| -|____ []* ____ | ==|| -// \\ // \\ |===|| hjw -"\__/"---------------"\__/"-+---+' -``` - -> Written by Paul Buetow 2021-05-16 - -Lately, I have been polishing and writing a lot of Bash code.
Not that I never wrote a lot of Bash, but now that I have also looked through the "Google Shell Style Guide" I thought it was time to write down my own thoughts on it. I agree with that guide on most, but not all, points. - -=> https://google.github.io/styleguide/shellguide.html Google Shell Style Guide - -## My modifications - -These are my personal modifications of the Google Guide. - -### Shebang - -Google recommends always using - -``` -#!/bin/bash -``` - -as the shebang line. But that does not really work on all Unix and Unix-like operating systems (e.g. the *BSDs don't have Bash installed to /bin/bash). Better is: - -``` -#!/usr/bin/env bash -``` - -### 2 space soft-tabs indentation - -I know there have been many tab- and soft-tab wars on this planet. Google recommends using 2 space soft-tabs for Bash scripts. - -I personally don't really care if I use 2 or 4 space indentations. I agree however that tabs should not be used. I personally tend to use 4 space soft-tabs as that's currently how my Vim is configured for any programming language. What matters most though is consistency within the same script/project. - -Google also recommends limiting the line length to 80 characters. For some people that seems to be an ancient habit from the 80s, when computer terminals couldn't display longer lines. But I think that the 80 character mark is still a good practice, at least for shell scripts. For example, I am often writing code on a Microsoft Go Tablet PC (running Linux of course) and it comes in very handy if the lines are not too long, due to the relatively small display of the device. - -I hit the 80 character line length quicker with 4 spaces than with 2 spaces, but that makes me refactor the Bash code more aggressively, which is actually a good thing.
- -### Breaking long pipes - -Google recommends breaking up long pipes like this: - -``` -# All fits on one line -command1 | command2 - -# Long commands -command1 \ - | command2 \ - | command3 \ - | command4 -``` - -I think there is a better way, like the following, which is less noisy. The pipe | already tells Bash that another command is expected, thus making the explicit line breaks with \ obsolete: - -``` -# Long commands -command1 | - command2 | - command3 | - command4 -``` - -### Quoting your variables - -Google recommends always quoting your variables. I think generally you should do that only for variables where you are unsure about the content/values of the variables (e.g. the content is from an external input source and may contain whitespace or other special characters). In my opinion, the code becomes quite noisy when you always quote your variables like this: - -``` -greet () { - local -r greeting="${1}" - local -r name="${2}" - echo "${greeting} ${name}!" -} -``` - -In this particular example I agree that you should quote them, as you don't really know what the input is (are there, for example, whitespace characters?). But if you are sure that you are only using simple bare words then I think that the code looks much cleaner when you do this instead: - -``` -say_hello_to_paul () { - local -r greeting=Hello - local -r name=Paul - echo "$greeting $name!" -} -``` - -You see I also omitted the curly braces { } around the variables. I only use the curly braces around variables when it makes the code either easier/clearer to read or when it is necessary to use them: - -``` -declare FOO=bar -# Curly braces around FOO are necessary -echo "foo${FOO}baz" -``` - -A few more words on always quoting the variables: For the sake of consistency (and for the sake of making ShellCheck happy) I am not against quoting everything I encounter. I personally also think that the larger the Bash script becomes, the more important it becomes to always quote variables.
That's because it becomes more likely that you won't remember which of the functions don't work on values with spaces in them, for example. It's just that I won't quote everything in every small script I write. - -### Prefer builtin commands over external commands - -Google recommends using the builtin commands over externally available commands where possible: - -``` -# Prefer this: -addition=$(( X + Y )) -substitution="${string/#foo/bar}" - -# Instead of this: -addition="$(expr "${X}" + "${Y}")" -substitution="$(echo "${string}" | sed -e 's/^foo/bar/')" -``` - -I don't fully agree here. The external commands (especially sed) are much more sophisticated and powerful than the Bash builtin versions. Sed can do much more than Bash can ever do natively when it comes to text manipulation (the name "sed" stands for stream editor after all). - -I prefer to do light text processing with the Bash builtins and more complicated text processing with external programs such as sed, grep, awk, cut and tr. There is however also the case of medium-light text processing where I would want to use external programs too. That is so because I remember how to use them better than the Bash builtins. The Bash can get quite obscure here (even Perl will be more readable then; side note: I love Perl). - -Also, you would want to use an external command for floating-point calculations (e.g. bc) instead of using the Bash builtins (worth noting that ZSH supports builtin floating-point arithmetic). - -I haven't even gotten started on what you can do with Awk (especially GNU Awk), a fully fledged programming language. Tiny Awk snippets tend to be used quite often in shell scripts without honouring the real power of Awk. But if you did everything in Perl or Awk or another scripting language, then it wouldn't be a Bash script anymore, would it? ;-) - -## My additions - -### Use of 'yes' and 'no' - -Bash does not support a boolean type. I tend to just use the strings 'yes' and 'no' here.
For some time I used 0 for false and 1 for true, but I think that the yes/no strings are easier to read. Yes, the Bash script will need to perform string comparisons on every check, but if performance is important to you, you wouldn't want to use a Bash script anyway, correct? - -``` -declare -r SUGAR_FREE=yes -declare -r I_NEED_THE_BUZZ=no - -buy_soda () { - local -r sugar_free=$1 - - if [[ $sugar_free == yes ]]; then - echo 'Diet Dr. Pepper' - else - echo 'Pepsi Coke' - fi -} - -buy_soda $I_NEED_THE_BUZZ -``` - -### Non-evil alternative to variable assignments via eval - -Google is of the opinion that eval should be avoided. I think so too. They list these examples in their guide: - -``` -# What does this set? -# Did it succeed? In part or whole? -eval $(set_my_variables) - -# What happens if one of the returned values has a space in it? -variable="$(eval some_function)" - -``` - -However, if I want to read variables from another file I don't have to use eval here. I just source the file: - -``` -% cat vars.source.sh -declare foo=bar -declare bar=baz -declare bay=foo - -% bash -c 'source vars.source.sh; echo $foo $bar $bay' -bar baz foo -``` - -And if I want to assign variables dynamically then I can just run an external script and source its output (this is how you can do metaprogramming in Bash without the use of eval: write code which produces code for immediate execution): - -``` -% cat vars.sh -#!/usr/bin/env bash -cat <<END -declare date="$(date)" -declare user=$USER -END - -% bash -c 'source <(./vars.sh); echo "Hello $user, it is $date"' -Hello paul, it is Sat 15 May 19:21:12 BST 2021 -``` - -The downside is that ShellCheck won't be able to follow the dynamic sourcing anymore. - -### Prefer pipes over arrays for list processing - -When I do list processing in Bash, I prefer to use pipes. You can chain them through Bash functions as well, which is pretty neat.
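A tiny runnable illustration of this chaining (the function names are mine): each function reads stdin and writes stdout, so they compose with |.

```shell
to_upper () {
    tr '[:lower:]' '[:upper:]'
}

add_prefix () {
    while read -r line; do
        echo ">> $line"
    done
}

printf '%s\n' alpha beta | to_upper | add_prefix
# >> ALPHA
# >> BETA
```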
Usually my list processing scripts are of a structure like this: - -``` -filter_lines () { - echo 'Start filtering lines in a fancy way!' >&2 - grep ... | sed .... -} - -process_lines () { - echo 'Start processing line by line!' >&2 - while read -r line; do - ... do something and produce a result... - echo "$result" - done -} - -# Do some post processing of the data -postprocess_lines () { - echo 'Start removing duplicates!' >&2 - sort -u -} - -generate_report () { - echo 'My boss wants to have a report!' >&2 - tee outfile.txt - wc -l outfile.txt -} - -main () { - filter_lines | - process_lines | - postprocess_lines | - generate_report -} - -main -``` - -The stdout of each stage is piped into the following stage. The stderr is used for info logging. - -### Assign-then-shift - -I often refactor existing Bash code. That leads me to adding and removing function arguments quite often. It's quite repetitive work changing the $1, $2, ... function argument numbers every time you change the order or add/remove possible arguments. - -The solution is to use the "assign-then-shift" method, which goes like this: "local -r var1=$1; shift; local -r var2=$1; shift". The idea is that you only use "$1" to assign function arguments to named (better readable) local function variables. You will never have to bother about "$2" or above. That is very useful when you constantly refactor your code and remove or add function arguments. It's something I picked up from a colleague (a pure Bash wizard) some time ago: - -``` -some_function () { - local -r param_foo="$1"; shift - local -r param_baz="$1"; shift - local -r param_bay="$1"; shift - ... -} -``` - -Want to add a param_bar? Just do this: - -``` -some_function () { - local -r param_foo="$1"; shift - local -r param_bar="$1"; shift - local -r param_baz="$1"; shift - local -r param_bay="$1"; shift - ... -} -``` - -Want to remove param_foo?
Nothing easier than that: - -``` -some_function () { - local -r param_bar="$1"; shift - local -r param_baz="$1"; shift - local -r param_bay="$1"; shift - ... -} -``` - -As you can see, I didn't need to change any other assignments within the function. Of course, you would also need to change the function argument lists at every occasion where the function is invoked; you would do that within the same refactoring session. - -### Paranoid mode - -I call this the paranoid mode. Bash will stop executing when a command exits with a status not equal to 0: - -``` -set -e -grep -q foo <<< bar -echo Jo -``` - -Here 'Jo' will never be printed out, as the grep didn't find any match. It's unrealistic for most scripts to purely run in paranoid mode, so there must be a way to add exceptions. Critical Bash scripts of mine tend to look like this: - -``` -#!/usr/bin/env bash - -set -e - -some_function () { - .. some critical code - ... - - set +e - # Grep might fail, but that's OK now - grep .... - local -i ec=$? - set -e - - .. critical code continues ... - if [[ $ec -ne 0 ]]; then - ... - fi - ... -} -``` - -## Learned - -There are also a couple of things I've learned from Google's guide. - -### Unintended lexicographical comparison - -The following looks like valid Bash code: - -``` -if [[ "${my_var}" > 3 ]]; then - # True for 4, false for 22. - do_something -fi -``` - -... but is probably an unintended lexicographical comparison. A correct way would be: - -``` -if (( my_var > 3 )); then - do_something -fi -``` - -or - -``` -if [[ "${my_var}" -gt 3 ]]; then - do_something -fi -``` - -### PIPESTATUS - -To be honest, I have never used the PIPESTATUS variable before. I knew that it's there, but I never bothered to fully understand how it works until now. - -The PIPESTATUS variable in Bash allows checking of the return codes from all parts of a pipe.
If it’s only necessary to check success or failure of the whole pipe, then the following is acceptable: - -``` -tar -cf - ./* | ( cd "${dir}" && tar -xf - ) -if (( PIPESTATUS[0] != 0 || PIPESTATUS[1] != 0 )); then - echo "Unable to tar files to ${dir}" >&2 -fi -``` - -However, as PIPESTATUS will be overwritten as soon as you run any other command, if you need to act differently on errors based on where it happened in the pipe, you’ll need to assign PIPESTATUS to another variable immediately after running the command (don’t forget that [ is a command and will wipe out PIPESTATUS). - -``` -tar -cf - ./* | ( cd "${DIR}" && tar -xf - ) -return_codes=( "${PIPESTATUS[@]}" ) -if (( return_codes[0] != 0 )); then - do_something -fi -if (( return_codes[1] != 0 )); then - do_something_else -fi -``` - -## Use common sense and BE CONSISTENT. - -The following two paragraphs are quoted verbatim from the Google guidelines. But they hit the nail on the head: - -> If you are editing code, take a few minutes to look at the code around you and determine its style. If they use spaces around their if clauses, you should, too. If their comments have little boxes of stars around them, make your comments have little boxes of stars around them too. - -> The point of having style guidelines is to have a common vocabulary of coding so people can concentrate on what you are saying, rather than on how you are saying it. We present global style rules here so people know the vocabulary. But local style is also important. If code you add to a file looks drastically different from the existing code around it, the discontinuity throws readers out of their rhythm when they go to read it. Try to avoid this. - - -## Advanced Bash learning pro tip - -I also highly recommend having a read through the "Advanced Bash-Scripting Guide" (which is not from Google). I use it as the universal Bash reference and learn something new every time I have a look at it. 
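 - -As a closing example, the PIPESTATUS capture pattern from above can be exercised with a small self-contained sketch. The function name and the example pipe are made up for illustration, and it assumes Bash (PIPESTATUS is a Bashism): - -```
#!/usr/bin/env bash
# Minimal demo of copying PIPESTATUS before it gets overwritten.
# The function name and the example pipe are hypothetical.

classify_pipe_failure () {
    # A three-stage pipe: printf succeeds, grep finds no match and
    # exits with 1, sort succeeds.
    printf 'haystack\n' | grep -q needle | sort > /dev/null

    # Copy PIPESTATUS immediately; running any other command
    # (even [ ) would wipe it out.
    local -a return_codes=( "${PIPESTATUS[@]}" )

    if (( return_codes[1] != 0 )); then
        echo "grep stage failed with ${return_codes[1]}"
    fi
    if (( return_codes[2] != 0 )); then
        echo "sort stage failed with ${return_codes[2]}"
    fi
}

classify_pipe_failure
``` - -Running it reports which stage of the pipe failed, without having to re-run the pipe.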
- -=> https://tldp.org/LDP/abs/html/ Advanced Bash-Scripting Guide - -E-Mail me your thoughts at comments@mx.buetow.org! - -=> ../ Go back to the main site diff --git a/content/gemtext/gemfeed/atom.xml b/content/gemtext/gemfeed/atom.xml deleted file mode 100644 index bc95ba61..00000000 --- a/content/gemtext/gemfeed/atom.xml +++ /dev/null @@ -1,2517 +0,0 @@ -<?xml version="1.0" encoding="utf-8"?> -<feed xmlns="http://www.w3.org/2005/Atom"> - <updated>2021-05-18T21:32:49+01:00</updated> - <title>buetow.org feed</title> - <subtitle>Having fun with computers!</subtitle> - <link href="gemini://buetow.org/gemfeed/atom.xml" rel="self" /> - <link href="gemini://buetow.org/" /> - <id>gemini://buetow.org/</id> - <entry> - <title>Personal Bash coding style guide</title> - <link href="gemini://buetow.org/gemfeed/2021-05-16-personal-bash-coding-style-guide.gmi" /> - <id>gemini://buetow.org/gemfeed/2021-05-16-personal-bash-coding-style-guide.gmi</id> - <updated>2021-05-16T14:51:57+01:00</updated> - <author> - <name>Paul Buetow</name> - <email>comments@mx.buetow.org</email> - </author> - <summary>Lately, I have been polishing and writing a lot of Bash code. Not that I never wrote a lot of Bash, but now as I also looked through the 'Google Shell Style Guide' I thought it is time to also write my own thoughts on that. I agree to that guide in most, but not in all points. . .....to read on please visit my site.</summary> - <content type="xhtml"> - <div xmlns="http://www.w3.org/1999/xhtml"> - <h1>Personal Bash coding style guide</h1> -<pre> - .---------------------------. - /,--..---..---..---..---..--. `. 
- //___||___||___||___||___||___\_| - [j__ ######################## [_| - \============================| - .==| |"""||"""||"""||"""| |"""|| -/======"---""---""---""---"=| =|| -|____ []* ____ | ==|| -// \\ // \\ |===|| hjw -"\__/"---------------"\__/"-+---+' -</pre> -<p class="quote"><i>Written by Paul Buetow 2021-05-16</i></p> -<p>Lately, I have been polishing and writing a lot of Bash code. Not that I never wrote a lot of Bash, but now as I also looked through the "Google Shell Style Guide" I thought it is time to also write my own thoughts on that. I agree with that guide on most, but not all, points. </p> -<a class="textlink" href="https://google.github.io/styleguide/shellguide.html">Google Shell Style Guide</a><br /> -<h2>My modifications</h2> -<p>These are my personal modifications of the Google Guide.</p> -<h3>Shebang</h3> -<p>Google recommends always using</p> -<pre> -#!/bin/bash -</pre> -<p>as the shebang line. But that does not really work on all Unix and Unix-like operating systems (e.g. the *BSDs don't have Bash installed to /bin/bash). Better is:</p> -<pre> -#!/usr/bin/env bash -</pre> -<h3>2 space soft-tabs indentation</h3> -<p>I know there have been many tab- and soft-tab wars on this planet. Google recommends using 2 space soft-tabs for Bash scripts. </p> -<p>I personally don't really care if I use 2 or 4 space indentations. I agree however that tabs should not be used. I personally tend to use 4 space soft-tabs as that's currently how my Vim is configured for any programming language. What matters most though is consistency within the same script/project.</p> -<p>Google also recommends limiting the line length to 80 characters. For some people that seems to be an ancient habit from the '80s, where all computer terminals couldn't display longer lines. But I think that the 80 character mark is still a good practice at least for shell scripts. 
For example, I am often writing code on a Microsoft Go Tablet PC (running Linux of course) and it comes in very handy if the lines are not too long due to the relatively small display on the device.</p> -<p>I hit the 80 character line length quicker with the 4 spaces than with 2 spaces, but that makes me refactor the Bash code more aggressively which is actually a good thing. </p> -<h3>Breaking long pipes</h3> -<p>Google recommends breaking up long pipes like this:</p> -<pre> -# All fits on one line -command1 | command2 - -# Long commands -command1 \ - | command2 \ - | command3 \ - | command4 -</pre> -<p>I think there is a better way like the following, which is less noisy. The pipe | already indicates to Bash that another command is expected, thus making the explicit line breaks with \ obsolete:</p> -<pre> -# Long commands -command1 | - command2 | - command3 | - command4 -</pre> -<h3>Quoting your variables</h3> -<p>Google recommends always quoting your variables. I think generally you should do that only for variables where you are unsure about the content/values of the variables (e.g. content is from an external input source and may contain whitespace or other special characters). In my opinion, the code will become quite noisy when you always quote your variables like this:</p> -<pre> -greet () { - local -r greeting="${1}" - local -r name="${2}" - echo "${greeting} ${name}!" -} -</pre> -<p>In this particular example I agree that you should quote them as you don't really know what the input is (are there for example whitespace characters?). But if you are sure that you are only using simple bare words then I think that the code looks much cleaner when you do this instead:</p> -<pre> -say_hello_to_paul () { - local -r greeting=Hello - local -r name=Paul - echo "$greeting $name!" -} -</pre> -<p>You see I also omitted the curly braces { } around the variables. 
I only use the curly braces around variables when it makes the code either easier/clearer to read or if it is necessary to use them:</p> -<pre> -declare FOO=bar -# Curly braces around FOO are necessary -echo "foo${FOO}baz" -</pre> -<p>A few more words on always quoting the variables: For the sake of consistency (and for the sake of making ShellCheck happy) I am not against quoting everything I encounter. I personally also think that the larger the Bash script becomes, the more important it becomes to always quote variables. That's because it will be more likely that you might not remember that some of the functions don't work on values with spaces in them for example. It's just that I won't quote everything in every small script I write. </p> -<h3>Prefer builtin commands over external commands</h3> -<p>Google recommends using the builtin commands over externally available commands where possible:</p> -<pre> -# Prefer this: -addition=$(( X + Y )) -substitution="${string/#foo/bar}" - -# Instead of this: -addition="$(expr "${X}" + "${Y}")" -substitution="$(echo "${string}" | sed -e 's/^foo/bar/')" -</pre> -<p>I don't agree fully here. The external commands (especially sed) are much more sophisticated and powerful than the Bash builtin versions. Sed can do much more than the Bash can ever do natively when it comes to text manipulation (the name "sed" stands for stream editor after all).</p> -<p>I prefer to do light text processing with the Bash builtins and more complicated text processing with external programs such as sed, grep, awk, cut and tr. There is however also the case of medium-light text processing where I would want to use external programs too. That is so because I remember using them better than the Bash builtins. The Bash can get quite obscure here (even Perl will be more readable then - Side note: I love Perl).</p> -<p>Also, you might want to use an external command for floating-point calculation (e.g. 
bc) instead of using the Bash builtins (worth noting that ZSH supports builtin floating-points).</p> -<p>I haven't even gotten started on what you can do with Awk (especially GNU Awk), a fully fledged programming language. Tiny Awk snippets tend to be used quite often in Shell scripts without honouring the real power of Awk. But if you did everything in Perl or Awk or another scripting language, then it wouldn't be a Bash script anymore, would it? ;-)</p> -<h2>My additions</h2> -<h3>Use of 'yes' and 'no'</h3> -<p>Bash does not support a boolean type. I tend to just use the strings 'yes' and 'no' here. For some time I used 0 for false and 1 for true, but I think that the yes/no strings are easier to read. Yes, the Bash script would need to perform string comparisons on every check, but if performance is important to you, you wouldn't want to use a Bash script anyway, correct?</p> -<pre> -declare -r SUGAR_FREE=yes -declare -r I_NEED_THE_BUZZ=no - -buy_soda () { - local -r sugar_free=$1 - - if [[ $sugar_free == yes ]]; then - echo 'Diet Dr. Pepper' - else - echo 'Pepsi Coke' - fi -} - -buy_soda $I_NEED_THE_BUZZ -</pre> -<h3>Non-evil alternative to variable assignments via eval</h3> -<p>Google is of the opinion that eval should be avoided. I think so too. They list these examples in their guide:</p> -<pre> -# What does this set? -# Did it succeed? In part or whole? -eval $(set_my_variables) - -# What happens if one of the returned values has a space in it? -variable="$(eval some_function)" - -</pre> -<p>However, if I want to read variables from another file I don't have to use eval here. 
I just source the file:</p> -<pre> -% cat vars.source.sh -declare foo=bar -declare bar=baz -declare bay=foo - -% bash -c 'source vars.source.sh; echo $foo $bar $bay' -bar baz foo -</pre> -<p>And if I want to assign variables dynamically then I could just run an external script and source its output (this is how you could do metaprogramming in Bash without the use of eval - write code which produces code for immediate execution):</p> -<pre> -% cat vars.sh -#!/usr/bin/env bash -cat <<END -declare date="$(date)" -declare user=$USER -END - -% bash -c 'source <(./vars.sh); echo "Hello $user, it is $date"' -Hello paul, it is Sat 15 May 19:21:12 BST 2021 -</pre> -<p>The downside is that ShellCheck won't be able to follow the dynamic sourcing anymore.</p> -<h3>Prefer pipes over arrays for list processing</h3> -<p>When I do list processing in Bash, I prefer to use pipes. You can chain them through Bash functions as well, which is pretty neat. Usually my list processing scripts are of a structure like this:</p> -<pre> -filter_lines () { - echo 'Start filtering lines in a fancy way!' >&2 - grep ... | sed .... -} - -process_lines () { - echo 'Start processing line by line!' >&2 - while read -r line; do - ... do something and produce a result... - echo "$result" - done -} - -# Do some post processing of the data -postprocess_lines () { - echo 'Start removing duplicates!' >&2 - sort -u -} - -generate_report () { - echo 'My boss wants to have a report!' >&2 - tee outfile.txt - wc -l outfile.txt -} - -main () { - filter_lines | - process_lines | - postprocess_lines | - generate_report -} - -main -</pre> -<p>The stdout is always passed as a pipe to the following stage. The stderr is used for info logging.</p> -<h3>Assign-then-shift</h3> -<p>I often refactor existing Bash code. That leads me to adding and removing function arguments quite often. It's quite repetitive work changing the $1, $2.... 
function argument numbers every time you change the order or add/remove possible arguments.</p> -<p>The solution is to use the "assign-then-shift" method, which goes like this: "local -r var1=$1; shift; local -r var2=$1; shift". The idea is that you only use "$1" to assign function arguments to named (more readable) local function variables. You will never have to bother about "$2" or above. That is very useful when you constantly refactor your code and remove or add function arguments. It's something I picked up from a colleague (a pure Bash wizard) some time ago:</p> -<pre> -some_function () { - local -r param_foo="$1"; shift - local -r param_baz="$1"; shift - local -r param_bay="$1"; shift - ... -} -</pre> -<p>Want to add a param_bar? Just do this:</p> -<pre> -some_function () { - local -r param_foo="$1"; shift - local -r param_bar="$1"; shift - local -r param_baz="$1"; shift - local -r param_bay="$1"; shift - ... -} -</pre> -<p>Want to remove param_foo? Nothing easier than that:</p> -<pre> -some_function () { - local -r param_bar="$1"; shift - local -r param_baz="$1"; shift - local -r param_bay="$1"; shift - ... -} -</pre> -<p>As you can see, I didn't need to change any other assignments within the function. Of course you would also need to change the function argument lists at every occasion where the function is invoked - you would do that within the same refactoring session.</p> -<h3>Paranoid mode</h3> -<p>I call this the paranoid mode. The Bash will stop executing when a command exits with a status not equal to 0:</p> -<pre> -set -e -grep -q foo <<< bar -echo Jo -</pre> -<p>Here 'Jo' will never be printed out as the grep didn't find any match. It's unrealistic for most scripts to purely run in paranoid mode so there must be a way to add exceptions. Critical Bash scripts of mine tend to look like this:</p> -<pre> -#!/usr/bin/env bash - -set -e - -some_function () { - .. some critical code - ... 
- - set +e - # Grep might fail, but that's OK now - grep .... - local -i ec=$? - set -e - - .. critical code continues ... - if [[ $ec -ne 0 ]]; then - ... - fi - ... -} -</pre> -<h2>Learned</h2> -<p>There are also a couple of things I've learned from Google's guide.</p> -<h3>Unintended lexicographical comparison</h3> -<p>The following looks like valid Bash code:</p> -<pre> -if [[ "${my_var}" > 3 ]]; then - # True for 4, false for 22. - do_something -fi -</pre> -<p>... but is probably an unintended lexicographical comparison. A correct way would be:</p> -<pre> -if (( my_var > 3 )); then - do_something -fi -</pre> -<p>or</p> -<pre> -if [[ "${my_var}" -gt 3 ]]; then - do_something -fi -</pre> -<h3>PIPESTATUS</h3> -<p>To be honest, I have never used the PIPESTATUS variable before. I knew that it's there, but I never bothered to fully understand how it works until now.</p> -<p>The PIPESTATUS variable in Bash allows checking of the return code from all parts of a pipe. If it’s only necessary to check success or failure of the whole pipe, then the following is acceptable:</p> -<pre> -tar -cf - ./* | ( cd "${dir}" && tar -xf - ) -if (( PIPESTATUS[0] != 0 || PIPESTATUS[1] != 0 )); then - echo "Unable to tar files to ${dir}" >&2 -fi -</pre> -<p>However, as PIPESTATUS will be overwritten as soon as you run any other command, if you need to act differently on errors based on where it happened in the pipe, you’ll need to assign PIPESTATUS to another variable immediately after running the command (don’t forget that [ is a command and will wipe out PIPESTATUS).</p> -<pre> -tar -cf - ./* | ( cd "${DIR}" && tar -xf - ) -return_codes=( "${PIPESTATUS[@]}" ) -if (( return_codes[0] != 0 )); then - do_something -fi -if (( return_codes[1] != 0 )); then - do_something_else -fi -</pre> -<h2>Use common sense and BE CONSISTENT.</h2> -<p>The following two paragraphs are quoted verbatim from the Google guidelines. 
But they hit the nail on the head:</p> -<p class="quote"><i>If you are editing code, take a few minutes to look at the code around you and determine its style. If they use spaces around their if clauses, you should, too. If their comments have little boxes of stars around them, make your comments have little boxes of stars around them too.</i></p> -<p class="quote"><i>The point of having style guidelines is to have a common vocabulary of coding so people can concentrate on what you are saying, rather than on how you are saying it. We present global style rules here so people know the vocabulary. But local style is also important. If code you add to a file looks drastically different from the existing code around it, the discontinuity throws readers out of their rhythm when they go to read it. Try to avoid this.</i></p> -<h2>Advanced Bash learning pro tip</h2> -<p>I also highly recommend having a read through the "Advanced Bash-Scripting Guide" (which is not from Google). I use it as the universal Bash reference and learn something new every time I have a look at it.</p> -<a class="textlink" href="https://tldp.org/LDP/abs/html/">Advanced Bash-Scripting Guide</a><br /> -<p>E-Mail me your thoughts at comments@mx.buetow.org!</p> - </div> - </content> - </entry> - <entry> - <title>Welcome to the Geminispace</title> - <link href="gemini://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi" /> - <id>gemini://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi</id> - <updated>2021-04-24T19:28:41+01:00</updated> - <author> - <name>Paul Buetow</name> - <email>comments@mx.buetow.org</email> - </author> - <summary>Have you reached this article already via Gemini? You need a special client for that; web browsers such as Firefox, Chrome, Safari etc. don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is: ... 
to read on visit my site.</summary> - <content type="xhtml"> - <div xmlns="http://www.w3.org/1999/xhtml"> - <h1>Welcome to the Geminispace</h1> -<p class="quote"><i>Written by Paul Buetow 2021-04-24, last updated 2021-04-30, ASCII Art by Andy Hood</i></p> -<p>Have you reached this article already via Gemini? You need a special client for that; web browsers such as Firefox, Chrome, Safari etc. don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is:</p> -<a class="textlink" href="gemini://buetow.org">gemini://buetow.org</a><br /> -<p>If you however still use HTTP then you are just surfing the fallback HTML version of this capsule. In that case I suggest reading on what this is all about :-).</p> -<pre> - - /\ - / \ - | | - |NASA| - | | - | | - | | - ' ` - |Gemini| - | | - |______| - '-`'-` . - / . \'\ . .' - ''( .'\.' ' .;' -'.;.;' ;'.;' ..;;' AsH - -</pre> -<h2>Motivation</h2> -<h3>My urge to revamp my personal website</h3> -<p>For some time I had the urge to revamp my personal website. Not to update the technology and the design of it but to update all the content (+ keep it current) and also to start a small tech blog again. So unconsciously I started to search for a good platform and/or software to do all of that in a KISS (keep it simple, stupid) way.</p> -<h3>My still great Laptop running hot</h3> -<p>Earlier this year (2021) I noticed that my almost 7-year-old but still great Laptop started to become hot and slowed down while surfing the web. Also, the Laptop's fan became quite noisy. This is all due to the additional bloat there was on the website, such as JavaScript, excessive use of CSS, tracking cookies+pixels, ads and so on. 
</p> -<p>All I wanted was to read an interesting article, but after a big advertising pop-up banner appeared and made everything worse, I gave up and closed the browser tab.</p> -<h2>Discovering the Gemini internet protocol</h2> -<p>Around the same time I discovered a relatively new, more lightweight protocol named Gemini, which does not support CPU-intensive features the way HTML, JavaScript and CSS do. Also, tracking and ads are not supported by the Gemini protocol.</p> -<p>The "downside" is that due to the limited capabilities of the Gemini protocol all sites look very old and spartan. But that is not really a downside, that is in fact a design choice people made. It is up to the client software how your capsule looks. For example, you could use a graphical client with nice font renderings and colors to improve the appearance. Or you could just use a very minimalistic command line black-and-white Gemini client. It's your (the user's) choice.</p> -<i>Screenshot Amfora Gemini terminal client surfing this site:</i><a href="https://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png"><img alt="Screenshot Amfora Gemini terminal client surfing this site" title="Screenshot Amfora Gemini terminal client surfing this site" src="https://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png" /></a><br /> -<p>Why is there a need for a new protocol? As the modern web is a superset of Gemini, can't we just use simple HTML 1.0? That's a good and valid question. It is not a technical problem but a human problem. We tend to abuse the features once they are available. You can be sure that things stay simple and efficient as long as you are using the Gemini protocol. 
On the other hand you can't force every website in the modern web to only create plain and simple looking HTML pages.</p> -<h2>My own Gemini capsule</h2> -<p>As it is very easy to set up and maintain your own Gemini capsule (Gemini server + content composed via the Gemtext markup language) I decided to create my own. What I really like about Gemini is that I can just use my favorite text editor and get typing. I don't need to worry about the style and design of my web presence and I also don't have to test anything in ten different web browsers. I can focus solely on the content! As a matter of fact, I am using the Vim editor + its spellchecker + auto word completion functionality to write this. </p> -<h2>Advantages summarised</h2> -<ul> -<li>Supports an alternative to the modern bloated web</li> -<li>Easy to operate and easy to write content</li> -<li>No need to worry about various web browser compatibilities</li> -<li>It's the client's responsibility how the content is designed+presented</li> -<li>Lightweight (although not as lightweight as the Gopher protocol)</li> -<li>Supports privacy (no cookies, no request header fingerprinting, TLS encryption)</li> -<li>Fun to play with (it's a bit geeky yes, but a lot of fun!)</li> -</ul> -<h2>Dive into deep Gemini space</h2> -<p>Check out one of the following links for more information about Gemini. For example, you will find a FAQ which explains why the protocol is named "Gemini". Many Gemini capsules are dual-hosted via Gemini and HTTP(S), so that people new to Gemini can sneak a peek at the content with a normal web browser. 
As a matter of fact, some people go as far as tri-hosting all their content via HTTP(S), Gemini and Gopher.</p> -<a class="textlink" href="gemini://gemini.circumlunar.space">gemini://gemini.circumlunar.space</a><br /> -<a class="textlink" href="https://gemini.circumlunar.space">https://gemini.circumlunar.space</a><br /> -<p>E-Mail me your thoughts at comments@mx.buetow.org!</p> - </div> - </content> - </entry> - <entry> - <title>DTail - The distributed log tail program</title> - <link href="gemini://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi" /> - <id>gemini://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi</id> - <updated>2021-04-22T19:28:41+01:00</updated> - <author> - <name>Paul Buetow</name> - <email>comments@mx.buetow.org</email> - </author> - <summary>This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal Gemini capsule too. ...to read on visit my site.</summary> - <content type="xhtml"> - <div xmlns="http://www.w3.org/1999/xhtml"> - <h1>DTail - The distributed log tail program</h1> -<p class="quote"><i>Written by Paul Buetow 2021-04-22, last updated 2021-04-26</i></p> -<i>DTail logo image:</i><a href="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/title.png"><img alt="DTail logo image" title="DTail logo image" src="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/title.png" /></a><br /> -<p>This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal Gemini capsule too.</p> -<a class="textlink" href="https://medium.com/mimecast-engineering/dtail-the-distributed-log-tail-program-79b8087904bb">Original Mimecast Engineering Blog post at Medium</a><br /> -<p>Running a large cloud-based service requires monitoring the state of huge numbers of machines, a task for which many standard UNIX tools were not really designed. 
In this post, I will describe a simple program, DTail, that Mimecast has built and released as Open-Source, which enables us to monitor the log files of many servers at once without the costly overhead of a full-blown log management system.</p> -<p>At Mimecast, we run over 10 thousand server boxes. Most of them host multiple microservices and each of them produces log files. Even with the use of time series databases and monitoring systems, raw application logs are still an important source of information when it comes to analysing, debugging, and troubleshooting services.</p> -<p>Every engineer familiar with UNIX or a UNIX-like platform (e.g., Linux) is well aware of tail, a command-line program for displaying text file content on the terminal, which is also especially useful for following application or system log files with tail -f logfile.</p> -<p>Think of DTail as a distributed version of the tail program which is very useful when you have a distributed application running on many servers. DTail is an Open-Source, cross-platform log file analysis & statistics gathering tool that is fairly easy to use, support and maintain, designed for Engineers and Systems Administrators. It is programmed in Google Go.</p> -<h2>A Mimecast Pet Project</h2> -<p>DTail got its inspiration from public domain tools already available in this area but it is a blue-sky from-scratch development which was first presented at Mimecast’s annual internal Pet Project competition (awarded with a Bronze prize). It has gained popularity since and is one of the most widely deployed DevOps tools at Mimecast (reaching nearly 10k server installations) and many engineers use it on a regular basis. The Open-Source version of DTail is available at:</p> -<a class="textlink" href="https://dtail.dev">https://dtail.dev</a><br /> -<p>Try it out — we would love any feedback. But first, read on…</p> -<h2>Differentiating from log management systems</h2> -<p>Why not just use a full-blown log management system? 
There are various Open-Source and commercial log management solutions available on the market you could choose from (e.g. the ELK stack). Most of them store the logs in a centralized location and are fairly complex to set up and operate. Possibly they are also pretty expensive to operate if you have to buy dedicated hardware (or pay fees to your cloud provider) and have to hire support staff for it.</p> -<p>DTail does not aim to replace any of the log management tools already available but is rather an additional tool crafted especially for ad-hoc debugging and troubleshooting purposes. DTail is cheap to operate as it does not require any dedicated hardware for log storage as it operates directly on the source of the logs. This means that there is a DTail server installed on all server boxes producing logs. This decentralized approach comes with the direct advantage that there is no added delay, because the logs are not shipped to a central log storage device. The reduced complexity also makes it more robust against outages. You won’t be able to troubleshoot your distributed application very well if the log management infrastructure isn’t working either.</p> -<i>DTail sample session animated gif:</i><a href="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif"><img alt="DTail sample session animated gif" title="DTail sample session animated gif" src="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif" /></a><br /> -<p>As a downside, you won’t be able to access any logs with DTail when the server is down. Furthermore, a server can store logs only up to a certain capacity as disks will fill up. For the purpose of ad-hoc debugging, these are not typically issues. Usually, it’s the application you want to debug and not the server. And disk space is rarely an issue for bare metal and VM-based systems these days, with sufficient space for several weeks’ worth of log storage being available. 
DTail also supports reading compressed logs. The currently supported compression algorithms are gzip and zstd.</p> -<h2>Combining simplicity, security and efficiency</h2> -<p>DTail also has a client component that connects to multiple servers concurrently to stream log files (or any other text files).</p> -<p>The DTail client interacts with a DTail server on port TCP/2222 via the SSH protocol and does not interact in any way with the system’s SSH server (e.g., OpenSSH Server) which might be running at port TCP/22 already. As a matter of fact, you don’t need a regular SSH server running for DTail at all. There is no support for interactive login shells at TCP/2222 either, as by design that port can only be used for text data streaming. The SSH protocol is used for the public/private key infrastructure and transport encryption only and DTail implements its own protocol on top of SSH for the features provided. There is no need to set up or buy any additional TLS certificates. The port 2222 can be easily reconfigured if you prefer to use a different one.</p> -<p>The DTail server, which is a single static binary, will not fork an external process. This means that all features are implemented in native Go code (exception: Linux ACL support is implemented in C, but it must be enabled explicitly at compile time), which helps to make it robust, secure, efficient, and easy to deploy. A single client, running on a standard Laptop, can connect to thousands of servers concurrently while still maintaining a small resource footprint.</p> -<p>Recent log files are very likely still in the file system caches on the servers. 
Therefore, there tends to be a minimal I/O overhead involved.</p> -<h2>The DTail family of commands</h2> -<p>Following the UNIX philosophy, DTail includes multiple command-line programs, each for a different purpose:</p> -<ul> -<li>dserver: The DTail server, the only binary required to be installed on the servers involved.</li> -<li>dtail: The distributed log tail client for following log files.</li> -<li>dcat: The distributed cat client for concatenating and displaying text files.</li> -<li>dgrep: The distributed grep client for searching text files for a regular expression pattern.</li> -<li>dmap: The distributed map-reduce client for aggregating stats from log files.</li> -</ul> -<i>DGrep sample session animated gif:</i><a href="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif"><img alt="DGrep sample session animated gif" title="DGrep sample session animated gif" src="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif" /></a><br /> -<h2>Usage example</h2> -<p>The use of these commands is almost self-explanatory for a person already used to the standard command line in Unix systems. One of the main goals is to make DTail easy to use. A tool that is too complicated to use under high-pressure scenarios (e.g., during an incident) can be quite detrimental.</p> -<p>The basic idea is to start one of the clients from the command line and provide a list of servers to connect to with --servers. You also must provide a path of remote (log) files via --files. If you want to process multiple files per server, you could either provide a comma-separated list of file paths or make use of file system globbing (or a combination of both).</p> -<p>The following example would connect to all DTail servers listed in the serverlist.txt, follow all files with the ending .log and filter for lines containing the string error. You can specify any Go-compatible regular expression. 
In this example we add the case-insensitive flag to the regex:</p>
-<pre>
-dtail --servers serverlist.txt --files '/var/log/*.log' --regex '(?i:error)'
-</pre>
-<p>You usually want to specify a regular expression as a client argument. This means that all lines are pre-filtered on the server side and only the matching lines are sent back to the client. If your logs grow very rapidly and the regex is not specific enough, your client might not be fast enough to keep up with processing all of the responses. This could be due to a network bottleneck or simply a slow terminal emulator displaying the log lines on the client side.</p>
-<p>A green 100 in the client output before each log line received from the server always indicates that there were no such problems and 100% of all log lines could be displayed on your terminal (have a look at the animated GIFs in this post). If the percentage falls below 100, it means that some of the channels used by the servers to send data to the client are congested and lines were dropped. In this case, the color changes from green to red. The user can then decide to run the same query with a more specific regex.</p>
-<p>You can also provide a comma-separated list of servers instead of a text file. There are many more options available; the ones listed here are just the very basic ones. There are more instructions and usage examples on the GitHub page. Also, you can study even more of the available options via the --help switch (some real treasures might be hidden there).</p>
-<h2>Fitting it in</h2>
-<p>DTail integrates nicely into the user management of existing infrastructure. It follows normal system permissions and does not open new “holes” on the server, which helps to keep security departments happy. The user does not have more or fewer file read permissions than via a regular SSH login shell.
There is full support for SSH keys, traditional UNIX permissions, and Linux ACLs. The resource footprint is also very low: on average, tailing and searching log files requires less than 100MB of RAM and less than a quarter of a CPU core per participating server. Complex map-reduce queries on big data sets will require more resources accordingly.</p>
-<h2>Advanced features</h2>
-<p>The features listed here are outside the scope of this blog post but are worth mentioning:</p>
-<ul>
-<li>Distributed map-reduce queries on stats provided in log files with dmap. dmap comes with its own SQL-like aggregation query language.</li>
-<li>Stats streaming with continuous map-reduce queries. The difference from normal queries is that the stats are aggregated over a specified interval only on the newly written log lines, thus giving a de-facto live stats view for each interval.</li>
-<li>Server-side scheduled queries on log files. The queries are configured in the DTail server configuration file and scheduled at certain time intervals. Results are written to CSV files. This is useful for generating daily stats from the log files without the need for an interactive client.</li>
-<li>Server-side stats streaming with continuous map-reduce queries. This can, for example, be used to periodically generate stats from the logs at a configured interval, e.g., log error counts by the minute. These can then be sent to a time-series database (e.g., Graphite) and plotted in a Grafana dashboard.</li>
-<li>Support for custom extensions, e.g., for different server discovery methods (so you don’t have to rely on plain server lists) and log file formats (so that map-reduce queries can parse more stats from the logs).</li>
-</ul>
-<h2>For the future</h2>
-<p>There are various features we want to see in the future.</p>
-<ul>
-<li>A spartan mode, printing nothing but the raw remote log data without any extra information, would be a nice feature to have.
This will make it easier to post-process the data produced by the DTail client with common UNIX tools. (To some degree this is possible already: just disable the ANSI terminal color output of the client with -noColors and pipe the output to another program.)</li>
-<li>It would be tempting to implement a dgoawk command, a distributed version of the AWK programming language implemented purely in Go, for advanced text data stream processing capabilities. There are 3rd-party libraries available implementing AWK in pure Go which could be used.</li>
-<li>A more complex change would be support for federated queries. You can connect to thousands of servers from a single client running on a laptop. But does it scale to 100k servers? Some of the servers could be used as middleware for connecting to even more servers.</li>
-<li>Another aspect is extending the documentation. Especially the advanced features, such as the map-reduce query language and the configuration of server-side queries, currently require more documentation. For now, you can read the code and sample config files, or just ask the author! This will certainly be addressed in the future.</li>
-</ul>
-<h2>Open Source</h2>
-<p>Mimecast highly encourages you to have a look at DTail and submit an issue for any features you would like to see. Have you found a bug? Maybe you just have a question or comment? If you want to go a step further: we would also love to see pull requests for any features or improvements.
Either way, if in doubt just contact us via the DTail GitHub page.</p>
-<a class="textlink" href="https://dtail.dev">https://dtail.dev</a><br />
-<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
- </div>
- </content>
- </entry>
- <entry>
- <title>Realistic load testing with I/O Riot for Linux</title>
- <link href="gemini://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi" />
- <id>gemini://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi</id>
- <updated>2018-06-01T14:50:29+01:00</updated>
- <author>
- <name>Paul Buetow</name>
- <email>comments@mx.buetow.org</email>
- </author>
- <summary>This text was first published in the German IT-Administrator computer magazine. Three years have passed since then, and I decided to publish it on my blog too. .....to read on please visit my site.</summary>
- <content type="xhtml">
- <div xmlns="http://www.w3.org/1999/xhtml">
- <h1>Realistic load testing with I/O Riot for Linux</h1>
-<pre>
- .---.
- / \
- \.@-@./
- /`\_/`\
- // _ \\
- | \ )|_
- /`\_`> <_/ \
-jgs\__/'---'\__/
-</pre>
-<p class="quote"><i>Written by Paul Buetow 2018-06-01, last updated 2021-05-08</i></p>
-<h2>Foreword</h2>
-<p>This text was first published in the German IT-Administrator computer magazine. Three years have passed since then, and I decided to publish it on my blog too.</p>
-<a class="textlink" href="https://www.admin-magazin.de/Das-Heft/2018/06/Realistische-Lasttests-mit-I-O-Riot">https://www.admin-magazin.de/Das-Heft/2018/06/Realistische-Lasttests-mit-I-O-Riot</a><br />
-<p>I haven't worked on I/O Riot for some time now, but everything written here is still valid. I am still using I/O Riot to debug I/O issues and patterns once in a while, so the tool is by no means obsolete yet.
The tool even helped to resolve a major production incident at work caused by disk I/O.</p>
-<p>I am eagerly looking forward to revamping I/O Riot so that it uses the new Linux BPF capabilities instead of plain old Systemtap (or alternatively: newer versions of Systemtap can also use BPF as the backend, as I have learned). Also, when I initially wrote I/O Riot, I didn't have any experience with the Go programming language yet, and therefore I wrote it in C. Once it gets revamped I might consider using Go instead of C, as it would spare me many segmentation faults and headaches during development ;-). I might also just stick with C for plain performance reasons and only refactor the code dealing with concurrency.</p>
-<p>Please note that some of the screenshots show the command "ioreplay" instead of "ioriot". That's because the name changed after those were taken.</p>
-<h1>The article</h1>
-<p>With I/O Riot, IT administrators can load test and optimize the I/O subsystem of Linux-based operating systems. The tool makes it possible to record I/O patterns and replay them at a later time as often as desired. This means bottlenecks can be reproduced and eradicated.</p>
-<p>When storing huge amounts of data, such as more than 200 billion archived emails at Mimecast, it's not only the available storage capacity that matters, but also the data throughput and latency. At the same time, operating costs must be kept as low as possible. The more systems are involved, the more important it is to optimize the hardware, the operating system and the applications running on them.</p>
-<h2>Background: Existing Techniques</h2>
-<p>Conventional I/O benchmarking: Administrators usually use open source benchmarking tools like IOZone and bonnie++. Available database systems such as Redis and MySQL come with their own benchmarking tools. The common problem with these tools is that they work with prescribed artificial I/O patterns.
Although this can test both sequential and randomized data access, the patterns do not correspond to what is found on production systems.</p>
-<p>Testing in a load test environment: Another option is to use a separate load test environment in which a production environment with all its dependencies is simulated as far as possible. However, an environment consisting of many microservices is very complex. Microservices are usually managed by different teams, which means extra coordination effort for each load test. Another challenge is to generate the load as authentically as possible so that the patterns correspond to a production environment. Such a load test environment can only handle as many requests as its weakest link can handle. For example, load generators send many read and write requests to a frontend microservice, and the frontend forwards the requests to a backend microservice responsible for storing the data. If the frontend service does not process the requests efficiently enough, the backend service is not well utilized in the first place. As a rule, all microservices are clustered across many servers, which makes everything even more complicated. Under all these conditions it is very difficult to test the I/O of separate backend systems. Moreover, for many small and medium-sized companies, a separate load test environment would not be feasible for cost reasons.</p>
-<p>Testing in the production environment: For these reasons, benchmarks are often carried out in the production environment. To derive value from this, such tests are performed especially during peak hours when systems are under high load. However, testing on production systems is associated with risks and can lead to failure or loss of data without adequate protection.</p>
-<h2>Benchmarking the Email Cloud at Mimecast</h2>
-<p>For email archiving, Mimecast uses an internally developed microservice, which is operated directly on Linux-based storage systems.
A storage cluster is divided into several replication volumes. Data is always replicated three times across two secure data centers. Customer data is automatically allocated to one or more volumes, depending on throughput, so that all volumes are assigned the same load. Customer data is archived on conventional but inexpensive hard disks with several terabytes of storage capacity each. I/O benchmarking proved difficult for all the reasons mentioned above. Furthermore, there are no ready-made tools for this purpose in the case of self-developed software. The service operates on many block devices simultaneously, which can make the RAID controller a bottleneck. None of the freely available benchmarking tools can test several block devices at the same time without extra effort. In addition, emails typically consist of many small files, and randomized access to many small files is particularly inefficient. Besides many software adaptations, the hardware and operating system must also be optimized.</p>
-<p>Mimecast encourages employees to be innovative and pursue their own ideas in the form of an internal competition called Pet Project. The goal of the pet project I/O Riot was to simplify OS- and hardware-level I/O benchmarking. The first prototype of I/O Riot was awarded an internal roadmap prize in the spring of 2017. A few months later, I/O Riot was used to reduce write latency in the storage clusters by about 50%. The improvement was first verified by I/O replay on a test system and then successively applied to all storage systems. I/O Riot was also used to resolve a production incident caused by disk I/O load.</p>
-<h2>Using I/O Riot</h2>
-<p>First, all I/O events on a production system are logged to a file with I/O Riot. The file is then copied to a test system where all events are replayed in the same way. The crucial point here is that you can reproduce I/O patterns as they are found on a production system as often as you like on a test system.
This makes it possible to tune the system's settings after each run.</p>
-<h3>Installation</h3>
-<p>I/O Riot was tested under CentOS 7.2 x86_64. For compiling, the GNU C compiler and Systemtap, including kernel debug information, are required. Other Linux distributions are theoretically compatible but untested. First of all, you should update the systems involved as follows:</p>
-<pre>
-% sudo yum update
-</pre>
-<p>If the kernel is updated, please restart the system. The installation could be done without a restart, but that would complicate it. The installed kernel version should always correspond to the currently running kernel. You can then install I/O Riot as follows:</p>
-<pre>
-% sudo yum install gcc git systemtap yum-utils kernel-devel-$(uname -r)
-% sudo debuginfo-install kernel-$(uname -r)
-% git clone https://github.com/mimecast/ioriot
-% cd ioriot
-% make
-% sudo make install
-% export PATH=$PATH:/opt/ioriot/bin
-</pre>
-<p>Note: It is not best practice to install any compilers on production systems. For further information please have a look at the enclosed README.md.</p>
-<h3>Recording of I/O events</h3>
-<p>All I/O events are kernel related. If a process wants to perform an I/O operation, such as opening a file, it must inform the kernel of this through a system call (syscall for short). I/O Riot relies on the Systemtap tool to record I/O syscalls. Systemtap, available for all popular Linux distributions, lets you take a look at the running kernel in production environments, which makes it well suited to monitoring all I/O-relevant Linux syscalls and logging them to a file. Other tools, such as strace, are not an alternative because they slow down the system too much.</p>
-<p>During recording, ioriot acts as a wrapper and executes all relevant Systemtap commands for you.
Use the following command to log all events to io.capture:</p>
-<pre>
-% sudo ioriot -c io.capture
-</pre>
-<i>Screenshot I/O recording:</i><a href="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png"><img alt="Screenshot I/O recording" title="Screenshot I/O recording" src="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png" /></a><br />
-<p>A Ctrl-C (SIGINT) stops recording prematurely. Otherwise, ioriot terminates automatically after 1 hour. Depending on the system load, the output file can grow to several gigabytes. Only metadata is logged, not the read and written data itself. When replaying later, only random data is used. Under certain circumstances, Systemtap may omit some system calls and issue warnings. This ensures that Systemtap does not consume too many resources.</p>
-<h3>Test preparation</h3>
-<p>Then copy io.capture to a test system. The log also contains all accesses to the pseudo file systems devfs, sysfs and procfs. Replaying these makes little sense, which is why you must first generate a cleaned, replayable version io.replay from io.capture as follows:</p>
-<pre>
-% sudo ioriot -c io.capture -r io.replay -u $USER -n TESTNAME
-</pre>
-<p>The parameter -n allows you to assign a freely selectable test name. The system user under which the test is to be replayed is specified via the parameter -u.</p>
-<h3>Test Initialization</h3>
-<p>The test will most likely want to access existing files. These are files the test wants to read but does not create by itself. Their existence must be ensured before the test. You can do this as follows:</p>
-<pre>
-% sudo ioriot -i io.replay
-</pre>
-<p>To avoid any damage to the running system, ioriot (formerly ioreplay) only works in special directories. The tool creates a separate subdirectory for each file system mount point (e.g. /, /usr/local, /store/00,...)
(here: /.ioriot/TESTNAME, /usr/local/.ioriot/TESTNAME, /store/00/.ioriot/TESTNAME,...). By default, the working directory of ioriot is /usr/local/ioriot/TESTNAME.</p>
-<i>Screenshot test preparation:</i><a href="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png"><img alt="Screenshot test preparation" title="Screenshot test preparation" src="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png" /></a><br />
-<p>You must re-initialize the environment before each run. Data from previous tests will be moved to a trash directory automatically, which can then be deleted permanently with "sudo ioriot -P".</p>
-<h3>Replay</h3>
-<p>After initialization, you can replay the log with -r. You can use -R to initiate both test initialization and replay in a single command, and -S can be used to specify a file to which statistics are written after the test run.</p>
-<p>You can also influence the playback speed: "-s 0" is interpreted as "play back as fast as possible" and is the default setting. With "-s 1" all operations are performed at original speed. "-s 2" would double the playback speed and "-s 0.5" would halve it.</p>
-<i>Screenshot replaying I/O:</i><a href="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure3-ioriot-replay.png"><img alt="Screenshot replaying I/O" title="Screenshot replaying I/O" src="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure3-ioriot-replay.png" /></a><br />
-<p>As an initial test, you could, for example, compare the two Linux I/O schedulers CFQ and Deadline and check with which scheduler the test runs fastest. You run the test separately for each scheduler.
The following shell loop iterates through all attached block devices of the system and changes their I/O scheduler to the one specified in the variable $new_scheduler (in this case either cfq or deadline). Subsequently, all I/O events from the io.replay log are played back. At the end, an output file with statistics is generated:</p>
-<pre>
-% new_scheduler=cfq
-% for scheduler in /sys/block/*/queue/scheduler; do
- echo $new_scheduler | sudo tee $scheduler
-done
-% sudo ioriot -R io.replay -S cfq.txt
-% new_scheduler=deadline
-% for scheduler in /sys/block/*/queue/scheduler; do
- echo $new_scheduler | sudo tee $scheduler
-done
-% sudo ioriot -R io.replay -S deadline.txt
-</pre>
-<p>According to the results, the test ran 940 seconds faster with the Deadline scheduler:</p>
-<pre>
-% cat cfq.txt
-Num workers: 4
-Threads per worker: 128
-Total threads: 512
-Highest loadavg: 259.29
-Performed ioops: 218624596
-Average ioops/s: 101544.17
-Time ahead: 1452s
-Total time: 2153.00s
-% cat deadline.txt
-Num workers: 4
-Threads per worker: 128
-Total threads: 512
-Highest loadavg: 342.45
-Performed ioops: 218624596
-Average ioops/s: 180234.62
-Time ahead: 2392s
-Total time: 1213.00s
-</pre>
-<p>In any case, you should also set up a time series database, such as Graphite, where the I/O throughput can be plotted. Figures 4 and 5 show the read and write access times of both tests. The dip makes it clear when the CFQ test ended and the Deadline test was started. The read latency of both tests is similar. Write latency is dramatically improved using the Deadline scheduler.</p>
-<i>Graphite visualization of the mean read access times in ms with CFQ and Deadline Scheduler.:</i><a href="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png"><img alt="Graphite visualization of the mean read access times in ms with CFQ and Deadline Scheduler."
title="Graphite visualization of the mean read access times in ms with CFQ and Deadline Scheduler." src="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png" /></a><br />
-<i>Graphite visualization of the average write access times in ms with CFQ and Deadline Scheduler.:</i><a href="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png"><img alt="Graphite visualization of the average write access times in ms with CFQ and Deadline Scheduler." title="Graphite visualization of the average write access times in ms with CFQ and Deadline Scheduler." src="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png" /></a><br />
-<p>You should also take a look at the iostat tool. The iostat screenshot shows the output of iostat -x 10 during a test run. As you can see, one block device is fully loaded at 99% utilization, while all other block devices still have sufficient headroom. This could be an indication of poor data distribution in the storage system and is worth investigating. It is not uncommon for I/O Riot to reveal software problems.</p>
-<i>Output of iostat. The block device sdy seems to be almost fully utilized at 99%.:</i><a href="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure6-iostat.png"><img alt="Output of iostat. The block device sdy seems to be almost fully utilized at 99%." title="Output of iostat. The block device sdy seems to be almost fully utilized at 99%." src="https://buetow.org/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure6-iostat.png" /></a><br />
-<h2>I/O Riot is Open Source</h2>
-<p>The tool has already proven to be very useful and will continue to be actively developed as time and priority permit. Mimecast intends to be an ongoing contributor to Open Source.
You can find I/O Riot at:</p>
-<a class="textlink" href="https://github.com/mimecast/ioriot">https://github.com/mimecast/ioriot</a><br />
-<h2>Systemtap</h2>
-<p>Systemtap is a tool for the instrumentation of the Linux kernel. The tool provides an AWK-like programming language. Programs written in it are compiled by Systemtap to C and then into a dynamically loadable kernel module. Loaded into the kernel, the program has access to Linux internals. The Systemtap program written for I/O Riot monitors which I/O syscalls take place, when, with which parameters, from which process, and with which return values.</p>
-<p>For example, the open syscall opens a file and returns the corresponding file descriptor. The read and write syscalls can operate on a file descriptor and return the number of bytes read or written. The close syscall closes a given file descriptor. I/O Riot comes with a ready-made Systemtap program, which has already been compiled into a kernel module and installed to /opt/ioriot.
In addition to open, read and close, it logs many other I/O-relevant calls.</p>
-<a class="textlink" href="https://sourceware.org/systemtap/">https://sourceware.org/systemtap/</a><br />
-<h2>More references</h2>
-<a class="textlink" href="http://www.iozone.org/">IOZone</a><br />
-<a class="textlink" href="https://www.coker.com.au/bonnie++/">Bonnie++</a><br />
-<a class="textlink" href="https://graphiteapp.org">Graphite</a><br />
-<a class="textlink" href="https://en.wikipedia.org/wiki/Memory-mapped_I/O">Memory mapped I/O</a><br />
-<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
- </div>
- </content>
- </entry>
- <entry>
- <title>Methods in C</title>
- <link href="gemini://buetow.org/gemfeed/2016-11-20-methods-in-c.gmi" />
- <id>gemini://buetow.org/gemfeed/2016-11-20-methods-in-c.gmi</id>
- <updated>2016-11-20T18:36:51+01:00</updated>
- <author>
- <name>Paul Buetow</name>
- <email>comments@mx.buetow.org</email>
- </author>
- <summary>You can do some sort of object-oriented programming in the C programming language. However, it is very limited, but also very easy and straightforward to use. .....to read on please visit my site.</summary>
- <content type="xhtml">
- <div xmlns="http://www.w3.org/1999/xhtml">
- <h1>Methods in C</h1>
-<p class="quote"><i>Written by Paul Buetow 2016-11-20</i></p>
-<p>You can do some sort of object-oriented programming in the C programming language. However, it is very limited, but also very easy and straightforward to use.</p>
-<h2>Example</h2>
-<p>Let's have a look at the following sample program. Basically, all you have to do is add a function pointer such as "calculate" to the definition of the struct "something_s".
Later, during the struct initialization, assign a function address to that function pointer:</p>
-<pre>
-#include <stdio.h>
-
-typedef struct {
- double (*calculate)(const double, const double);
- char *name;
-} something_s;
-
-double multiplication(const double a, const double b) {
- return a * b;
-}
-
-double division(const double a, const double b) {
- return a / b;
-}
-
-int main(void) {
- something_s mult = (something_s) {
- .calculate = multiplication,
- .name = "Multiplication"
- };
-
- something_s div = (something_s) {
- .calculate = division,
- .name = "Division"
- };
-
- const double a = 3, b = 2;
-
- printf("%s(%f, %f) => %f\n", mult.name, a, b, mult.calculate(a,b));
- printf("%s(%f, %f) => %f\n", div.name, a, b, div.calculate(a,b));
-}
-</pre>
-<p>As you can see, you can call the function (pointed to by the function pointer) the same way as in C++ or Java via:</p>
-<pre>
-printf("%s(%f, %f) => %f\n", mult.name, a, b, mult.calculate(a,b));
-printf("%s(%f, %f) => %f\n", div.name, a, b, div.calculate(a,b));
-</pre>
-<p>However, that's just syntactic sugar for:</p>
-<pre>
-printf("%s(%f, %f) => %f\n", mult.name, a, b, (*mult.calculate)(a,b));
-printf("%s(%f, %f) => %f\n", div.name, a, b, (*div.calculate)(a,b));
-</pre>
-<p>Output:</p>
-<pre>
-pbuetow ~/git/blog/source [38268]% gcc methods-in-c.c -o methods-in-c
-pbuetow ~/git/blog/source [38269]% ./methods-in-c
-Multiplication(3.000000, 2.000000) => 6.000000
-Division(3.000000, 2.000000) => 1.500000
-</pre>
-<p>Not complicated at all, but nice to know, and it helps to make the code easier to read!</p>
-<h2>The flaw</h2>
-<p>That's not really how it works in object-oriented languages such as Java and C++, though. The method call in this example is not a real method call, as "mult" and "div" are not "message receivers". What I mean by that is that the functions cannot access the state of the "mult" and "div" struct objects.
In C, if you wanted to access the state of "mult" from within the calculate function, you would have to pass it as an argument:</p>
-<pre>
-mult.calculate(mult, a, b);
-</pre>
-<p>How to overcome this? You need to take it further...</p>
-<h2>Taking it further</h2>
-<p>If you want to take it further, type "Object-Oriented Programming with ANSI-C" into your favorite internet search engine and you will find some crazy stuff. Some go as far as writing a C preprocessor in AWK, which takes object-oriented pseudo-C and transforms it into plain C so that the C compiler can compile it to machine code. This is actually similar to how the C++ language had its origins.</p>
-<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
- </div>
- </content>
- </entry>
- <entry>
- <title>Spinning up my own authoritative DNS servers</title>
- <link href="gemini://buetow.org/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi" />
- <id>gemini://buetow.org/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi</id>
- <updated>2016-05-22T18:59:01+01:00</updated>
- <author>
- <name>Paul Buetow</name>
- <email>comments@mx.buetow.org</email>
- </author>
- <summary>Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains 'buetow.org' and 'buetow.zone'. My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files). And they also give you the opportunity to set your own authoritative DNS servers for your domains. From now on, I am making use of that option.
.....to read on please visit my site.</summary>
- <content type="xhtml">
- <div xmlns="http://www.w3.org/1999/xhtml">
- <h1>Spinning up my own authoritative DNS servers</h1>
-<p class="quote"><i>Written by Paul Buetow 2016-05-22</i></p>
-<h2>Background</h2>
-<p>Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains "buetow.org" and "buetow.zone". My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files). And they also give you the opportunity to set your own authoritative DNS servers for your domains. From now on, I am making use of that option.</p>
-<a class="textlink" href="http://www.schlundtech.de">Schlund Technologies</a><br />
-<h2>All FreeBSD Jails</h2>
-<p>In order to set up my authoritative DNS servers, I installed a FreeBSD Jail dedicated to DNS with Puppet on my root machine as follows:</p>
-<pre>
-include freebsd
-
-freebsd::ipalias { '2a01:4f8:120:30e8::14':
- ensure => up,
- proto => 'inet6',
- preflen => '64',
- interface => 're0',
- aliasnum => '5',
-}
-
-include jail::freebsd
-
-class { 'jail':
- ensure => present,
- jails_config => {
- dns => {
- '_ensure' => present,
- '_type' => 'freebsd',
- '_mirror' => 'ftp://ftp.de.freebsd.org',
- '_remote_path' => 'FreeBSD/releases/amd64/10.1-RELEASE',
- '_dists' => [ 'base.txz', 'doc.txz', ],
- '_ensure_directories' => [ '/opt', '/opt/enc' ],
- 'host.hostname' => "'dns.ian.buetow.org'",
- 'ip4.addr' => '192.168.0.15',
- 'ip6.addr' => '2a01:4f8:120:30e8::15',
- },
- .
- .
- }
-}
-</pre>
-<h2>PF firewall</h2>
-<p>Please note that "dns.ian.buetow.org" is just the Jail name of the master DNS server (and "caprica.ian.buetow.org" the name of the Jail for the slave DNS server) and that I am using the DNS names "dns1.buetow.org" (master) and "dns2.buetow.org" (slave) for the actual service names (these are the DNS servers visible to the public). Please also note that the IPv4 address is an internal one.
I have PF set up to use NAT and PAT. The DNS ports are being forwarded (TCP and UDP) to that Jail. By default, all ports are blocked, so I am adding an exception rule for the IPv6 address as well. These are the PF rules in use:</p>
-<pre>
-% cat /etc/pf.conf
-.
-.
-# dns.ian.buetow.org
-rdr pass on re0 proto tcp from any to $pub_ip port {53} -> 192.168.0.15
-rdr pass on re0 proto udp from any to $pub_ip port {53} -> 192.168.0.15
-pass in on re0 inet6 proto tcp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
-pass in on re0 inet6 proto udp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
-.
-.
-</pre>
-<h2>Puppet managed BIND zone files</h2>
-<p>In "manifests/dns.pp" (the Puppet manifest for the master DNS Jail itself) I configured the BIND DNS server this way:</p>
-<pre>
-class { 'bind_freebsd':
- config => "puppet:///files/bind/named.${::hostname}.conf",
- dynamic_config => "puppet:///files/bind/dynamic.${::hostname}",
-}
-</pre>
-<p>The Puppet module is actually a pretty simple one. It installs the file "/usr/local/etc/named/named.conf" and it populates the "/usr/local/etc/named/dynamicdb" directory with all my zone files.</p>
-<p>Once applied (via Puppet) inside of the Jail, I get this:</p>
-<pre>
-paul uranus:~/git/blog/source [4268]% ssh admin@dns1.buetow.org.buetow.org pgrep -lf named
-60748 /usr/local/sbin/named -u bind -c /usr/local/etc/namedb/named.conf
-paul uranus:~/git/blog/source [4269]% ssh admin@dns1.buetow.org.buetow.org tail -n 13 /usr/local/etc/namedb/named.conf
-zone "buetow.org" {
- type master;
- notify yes;
- allow-update { key "buetoworgkey"; };
- file "/usr/local/etc/namedb/dynamic/buetow.org";
-};
-
-zone "buetow.zone" {
- type master;
- notify yes;
- allow-update { key "buetoworgkey"; };
- file "/usr/local/etc/namedb/dynamic/buetow.zone";
-};
-paul uranus:~/git/blog/source [4277]% ssh admin@dns1.buetow.org.buetow.org cat /usr/local/etc/namedb/dynamic/buetow.org
-$TTL 3600
-@ IN SOA dns1.buetow.org.
domains.buetow.org. ( - 25 ; Serial - 604800 ; Refresh - 86400 ; Retry - 2419200 ; Expire - 604800 ) ; Negative Cache TTL -; Infrastructure domains -@ IN NS dns1 -@ IN NS dns2 -* 300 IN CNAME web.ian -buetow.org. 86400 IN A 78.46.80.70 -buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:11 -buetow.org. 86400 IN MX 10 mail.ian -dns1 86400 IN A 78.46.80.70 -dns1 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:15 -dns2 86400 IN A 164.177.171.32 -dns2 86400 IN AAAA 2a03:2500:1:6:20:: -. -. -. -. -</pre> -<p>That is my master DNS server. My slave DNS server runs in another Jail on another bare metal machine. Everything is set up similarly to the master DNS server, but that server is located in a different DC and in different IP subnets. The other difference is the "named.conf": it's configured as a slave, which means that the "dynamic" zone directory gets populated by BIND itself through zone transfers from the master.</p> -<pre> -paul uranus:~/git/blog/source [4279]% ssh admin@dns2.buetow.org tail -n 11 /usr/local/etc/namedb/named.conf -zone "buetow.org" { - type slave; - masters { 78.46.80.70; }; - file "/usr/local/etc/namedb/dynamic/buetow.org"; -}; - -zone "buetow.zone" { - type slave; - masters { 78.46.80.70; }; - file "/usr/local/etc/namedb/dynamic/buetow.zone"; -}; -</pre> -<h2>The end result</h2> -<p>The end result looks like this now:</p> -<pre> -% dig -t ns buetow.org -; <<>> DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 <<>> -t ns buetow.org -;; global options: +cmd -;; Got answer: -;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37883 -;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 - -;; OPT PSEUDOSECTION: -; EDNS: version: 0, flags:; udp: 512 -;; QUESTION SECTION: -;buetow.org. IN NS - -;; ANSWER SECTION: -buetow.org. 600 IN NS dns2.buetow.org. -buetow.org. 600 IN NS dns1.buetow.org.
- -;; Query time: 41 msec -;; SERVER: 192.168.1.254#53(192.168.1.254) -;; WHEN: Sun May 22 11:34:11 BST 2016 -;; MSG SIZE rcvd: 77 - -% dig -t any buetow.org @dns1.buetow.org -; <<>> DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 <<>> -t any buetow.org @dns1.buetow.org -;; global options: +cmd -;; Got answer: -;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49876 -;; flags: qr aa rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 7 - -;; OPT PSEUDOSECTION: -; EDNS: version: 0, flags:; udp: 4096 -;; QUESTION SECTION: -;buetow.org. IN ANY - -;; ANSWER SECTION: -buetow.org. 86400 IN A 78.46.80.70 -buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::11 -buetow.org. 86400 IN MX 10 mail.ian.buetow.org. -buetow.org. 3600 IN SOA dns1.buetow.org. domains.buetow.org. 25 604800 86400 2419200 604800 -buetow.org. 3600 IN NS dns2.buetow.org. -buetow.org. 3600 IN NS dns1.buetow.org. - -;; ADDITIONAL SECTION: -mail.ian.buetow.org. 86400 IN A 78.46.80.70 -dns1.buetow.org. 86400 IN A 78.46.80.70 -dns2.buetow.org. 86400 IN A 164.177.171.32 -mail.ian.buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::12 -dns1.buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::15 -dns2.buetow.org. 86400 IN AAAA 2a03:2500:1:6:20:: - -;; Query time: 42 msec -;; SERVER: 78.46.80.70#53(78.46.80.70) -;; WHEN: Sun May 22 11:34:41 BST 2016 -;; MSG SIZE rcvd: 322 -</pre> -<h2>Monitoring</h2> -<p>For monitoring I am using Icinga2 (I am operating two Icinga2 instances in two different DCs). 
I may write another blog article about Icinga2, but to give you the idea, these are the snippets added to my Icinga2 configuration:</p> -<pre> -apply Service "dig" { - import "generic-service" - - check_command = "dig" - vars.dig_lookup = "buetow.org" - vars.timeout = 30 - - assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org" -} - -apply Service "dig6" { - import "generic-service" - - check_command = "dig" - vars.dig_lookup = "buetow.org" - vars.timeout = 30 - vars.check_ipv6 = true - - assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org" -} -</pre> -<h2>DNS update workflow</h2> -<p>Whenever I have to change a DNS entry, all I have to do is:</p> -<ul> -<li>Git clone or update the Puppet repository</li> -<li>Update/commit and push the zone file (e.g. "buetow.org")</li> -<li>Wait for Puppet: it will deploy the updated zone file and reload the BIND server.</li> -<li>The BIND server will notify all slave DNS servers (at the moment only one), which will then transfer the new version of the zone.</li> -</ul> -<p>That's much more comfortable than manually clicking through some web UIs at Schlund Technologies.</p> -<p>E-Mail me your thoughts at comments@mx.buetow.org!</p> - </div> - </content> - </entry> - <entry> - <title>Offsite backup with ZFS (Part 2)</title> - <link href="gemini://buetow.org/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi" /> - <id>gemini://buetow.org/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi</id> - <updated>2016-04-16T22:43:42+01:00</updated> - <author> - <name>Paul Buetow</name> - <email>comments@mx.buetow.org</email> - </author> - <summary>I enhanced the procedure a bit. From now on I have two external 2TB USB hard drives. Both are set up exactly the same way. To decrease the probability that both will fail at around the same time, the drives are of different brands. One drive is kept at the secret location.
The other one is kept at home right next to my HP MicroServer. ...to read on visit my site.</summary> - <content type="xhtml"> - <div xmlns="http://www.w3.org/1999/xhtml"> - <h1>Offsite backup with ZFS (Part 2)</h1> -<pre> - ________________ -|# : : #| -| : ZFS/GELI : |________________ -| : Offsite : |# : : #| -| : Backup 1 : | : ZFS/GELI : | -| :___________: | : Offsite : | -| _________ | : Backup 2 : | -| | __ | | :___________: | -| || | | | _________ | -\____||__|_____|_| | __ | | - | || | | | - \____||__|_____|__| -</pre> -<p class="quote"><i>Written by Paul Buetow 2016-04-16</i></p> -<a class="textlink" href="https://buetow.org/gemfeed/2016-04-03-offsite-backup-with-zfs.html">Read the first part before reading any further here...</a><br /> -<p>I enhanced the procedure a bit. From now on I have two external 2TB USB hard drives. Both are set up exactly the same way. To decrease the probability that both will fail at around the same time, the drives are of different brands. One drive is kept at the secret location. The other one is kept at home right next to my HP MicroServer.</p> -<p>Whenever I update the offsite backup, I do it on the drive which is kept locally. Afterwards I bring it to the secret location, swap the drives, and bring the other one back home. This ensures that I will always have an offsite backup available at a different location than my home - even while updating one copy of it.</p> -<p>Furthermore, I added scrubbing (*zpool scrub...*) to the script. It ensures that the file system is consistent and that there are no bad blocks on the disk or in the file system. To increase the reliability I also run a *zfs set copies=2 zroot*. That setting is also synchronized to the offsite ZFS pool. ZFS stores every data block to disk twice now. Yes, it consumes twice as much disk space, but it makes the pool more fault-tolerant against hardware errors (e.g. only individual disk sectors going bad).
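The extra-redundancy and scrubbing steps described above can be sketched as a small shell script. This is a dry-run sketch that only echoes the commands: "zroot" is taken from the article, while the offsite pool name "zoffsite" and the `run` helper are assumptions, not from the author's actual script.

```shell
# Dry-run sketch of the redundancy and scrub maintenance described above.
# "zroot" comes from the article; "zoffsite" (offsite pool name) is assumed.
run() { echo "+ $*"; }            # swap the echo for "$@" to really execute

run zfs set copies=2 zroot        # store every data block twice on disk
run zpool scrub zoffsite          # re-verify all checksums, find bad blocks
run zpool status zoffsite         # inspect scrub progress and results
```

The `run` wrapper keeps the sketch safe to execute anywhere; on a real system the same commands would be run directly.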
</p> -<p>E-Mail me your thoughts at comments@mx.buetow.org!</p> - </div> - </content> - </entry> - <entry> - <title>Jails and ZFS with Puppet on FreeBSD</title> - <link href="gemini://buetow.org/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi" /> - <id>gemini://buetow.org/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi</id> - <updated>2016-04-09T18:29:47+01:00</updated> - <author> - <name>Paul Buetow</name> - <email>comments@mx.buetow.org</email> - </author> - <summary>Over the last couple of years I wrote quite a few Puppet modules in order to manage my personal server infrastructure. One of them manages FreeBSD Jails and another one ZFS file systems. I thought I would give a brief overview of how it looks and feels. .....to read on please visit my site.</summary> - <content type="xhtml"> - <div xmlns="http://www.w3.org/1999/xhtml"> - <h1>Jails and ZFS with Puppet on FreeBSD</h1> -<pre> - __ __ - (( \---/ )) - )__ __( - / ()___() \ - \ /(_)\ / - \ \_|_/ / - _______> <_______ - //\ |>o<| /\\ - \\/___ ___\// - | | - | | - | | - | | - `--....---' - \ \ - \ `. hjw - \ `. -</pre> -<p class="quote"><i>Written by Paul Buetow 2016-04-09</i></p> -<p>Over the last couple of years I wrote quite a few Puppet modules in order to manage my personal server infrastructure. One of them manages FreeBSD Jails and another one ZFS file systems. I thought I would give a brief overview of how it looks and feels.</p> -<a class="textlink" href="https://github.com/snonux/puppet-modules">https://github.com/snonux/puppet-modules</a><br /> -<h2>ZFS</h2> -<p>The ZFS module is a pretty basic one. It does not manage ZFS pools yet, as I do not create them often enough to justify automating that.
But let's see how we can create a ZFS file system (on an already given ZFS pool named ztank):</p> -<p>Puppet snippet:</p> -<pre> -zfs::create { 'ztank/foo': - ensure => present, - filesystem => '/srv/foo', - - require => File['/srv'], -} -</pre> -<p>Puppet run:</p> -<pre> -admin alphacentauri:/opt/git/server/puppet/manifests [1212]% puppet.apply -Password: -Info: Loading facts -Info: Loading facts -Info: Loading facts -Info: Loading facts -Notice: Compiled catalog for alphacentauri.home in environment production in 7.14 seconds -Info: Applying configuration version '1460189837' -Info: mount[files]: allowing * access -Info: mount[restricted]: allowing * access -Notice: /Stage[main]/Main/Node[alphacentauri]/Zfs::Create[ztank/foo]/Exec[ztank/foo_create]/returns: executed successfully -Notice: Finished catalog run in 25.41 seconds -admin alphacentauri:~ [1213]% zfs list | grep foo -ztank/foo 96K 1.13T 96K /srv/foo -admin alphacentauri:~ [1214]% df | grep foo -ztank/foo 1214493520 96 1214493424 0% /srv/foo -admin alphacentauri:~ [1215]% -</pre> -<p>Destroying the file system just requires setting "ensure" to "absent" in Puppet:</p> -<pre> -zfs::create { 'ztank/foo': - ensure => absent, - filesystem => '/srv/foo', - - require => File['/srv'], -} -</pre> -<p>Puppet run:</p> -<pre> -admin alphacentauri:/opt/git/server/puppet/manifests [1220]% puppet.apply -Password: -Info: Loading facts -Info: Loading facts -Info: Loading facts -Info: Loading facts -Notice: Compiled catalog for alphacentauri.home in environment production in 6.14 seconds -Info: Applying configuration version '1460190203' -Info: mount[files]: allowing * access -Info: mount[restricted]: allowing * access -Notice: /Stage[main]/Main/Node[alphacentauri]/Zfs::Create[ztank/foo]/Exec[zfs destroy -r ztank/foo]/returns: executed successfully -Notice: Finished catalog run in 22.72 seconds -admin alphacentauri:/opt/git/server/puppet/manifests [1221]% zfs list | grep foo -zsh: done zfs list | -zsh: exit 1 grep
foo -admin alphacentauri:/opt/git/server/puppet/manifests [1222:1]% df | grep foo -zsh: done df | -zsh: exit 1 grep foo -</pre> -<h2>Jails</h2> -<p>Here is an example of how a FreeBSD Jail can be created. The Jail will have its own public IPv6 address. And it will have its own internal IPv4 address with IPv4 NAT to the internet (this is due to the limitation that the host server only has one public IPv4 address, which must be shared between all the Jails).</p> -<p>Furthermore, Puppet will ensure that the Jail will have its own ZFS file system (internally it is using the ZFS module). Please note that the NAT requires the packet filter to be set up correctly (not covered in this blog post).</p> -<pre> -include jail::freebsd - -# Cloned interface for Jail IPv4 NAT -freebsd::rc_config { 'cloned_interfaces': - value => 'lo1', -} -freebsd::rc_config { 'ipv4_addrs_lo1': - value => '192.168.0.1-24/24' -} - -freebsd::ipalias { '2a01:4f8:120:30e8::17': - ensure => up, - proto => 'inet6', - preflen => '64', - interface => 're0', - aliasnum => '8', -} - -class { 'jail': - ensure => present, - jails_config => { - sync => { - '_ensure' => present, - '_type' => 'freebsd', - '_mirror' => 'ftp://ftp.de.freebsd.org', - '_remote_path' => 'FreeBSD/releases/amd64/10.1-RELEASE', - '_dists' => [ 'base.txz', 'doc.txz', ], - '_ensure_directories' => [ '/opt', '/opt/enc' ], - '_ensure_zfs' => [ '/sync' ], - 'host.hostname' => "'sync.ian.buetow.org'", - 'ip4.addr' => '192.168.0.17', - 'ip6.addr' => '2a01:4f8:120:30e8::17', - }, - } -} -</pre> -<p>This is what the result looks like:</p> -<pre> -admin sun:/etc [1939]% puppet.apply -Info: Loading facts -Info: Loading facts -Info: Loading facts -Info: Loading facts -Notice: Compiled catalog for sun.ian.buetow.org in environment production in 1.80 seconds -Info: Applying configuration version '1460190986' -Notice: /Stage[main]/Jail/File[/etc/jail.conf]/ensure: created -Info: mount[files]: allowing * access -Info: mount[restricted]: allowing * 
access -Info: Computing checksum on file /etc/motd -Info: /Stage[main]/Motd/File[/etc/motd]: Filebucketed /etc/motd to puppet with sum fced1b6e89f50ef2c40b0d7fba9defe8 -Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/Zfs::Create[zroot/jail/sync]/Exec[zroot/jail/sync_create]/returns: executed successfully -Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync/opt]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync/opt/enc]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Ensure_zfs[/sync]/Zfs::Create[zroot/jail/sync/sync]/Exec[zroot/jail/sync/sync_create]/returns: executed successfully -Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/jail/sync/.jailbootstrap]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/etc/fstab.jail.sync]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/jail/sync/.jailbootstrap/bootstrap.sh]/ensure: created -Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/Exec[sync_bootstrap]/returns: executed successfully -Notice: Finished catalog run in 49.72 seconds -admin sun:/etc [1942]% ls -l /jail/sync -total 154 --r--r--r-- 1 root wheel 6198 11 Nov 2014 COPYRIGHT -drwxr-xr-x 2 root wheel 47 11 Nov 2014 bin -drwxr-xr-x 7 root wheel 43 11 Nov 2014 boot -dr-xr-xr-x 2 root wheel 2 11 Nov 2014 dev -drwxr-xr-x 23 root wheel 101 9 Apr 10:37 etc -drwxr-xr-x 3 root wheel 50 11 Nov 2014 lib -drwxr-xr-x 3 root wheel 4 11 Nov 2014 libexec -drwxr-xr-x 2 root wheel 2 11 Nov 2014 media -drwxr-xr-x 2 root wheel 2 11 Nov 2014 mnt -drwxr-xr-x 3 root wheel 3 9 Apr 10:36 opt -dr-xr-xr-x 2 root wheel 2 11 Nov 2014 proc -drwxr-xr-x 2 root wheel 143 11 Nov 2014 rescue -drwxr-xr-x 2 root wheel 6 11 Nov 2014 root -drwxr-xr-x 2 root wheel 132 11 Nov 2014 sbin -drwxr-xr-x 2 root wheel 2 9 Apr 10:36 sync 
-lrwxr-xr-x 1 root wheel 11 11 Nov 2014 sys -> usr/src/sys -drwxrwxrwt 2 root wheel 2 11 Nov 2014 tmp -drwxr-xr-x 14 root wheel 14 11 Nov 2014 usr -drwxr-xr-x 24 root wheel 24 11 Nov 2014 var -admin sun:/etc [1943]% zfs list | grep sync;df | grep sync -zroot/jail/sync 162M 343G 162M /jail/sync -zroot/jail/sync/sync 144K 343G 144K /jail/sync/sync -/opt/enc 5061624 84248 4572448 2% /jail/sync/opt/enc -zroot/jail/sync 360214972 166372 360048600 0% /jail/sync -zroot/jail/sync/sync 360048744 144 360048600 0% /jail/sync/sync -admin sun:/etc [1944]% cat /etc/fstab.jail.sync -# Generated by Puppet for a Jail. -# Can contain file systems to be mounted during jail start. -admin sun:/etc [1945]% cat /etc/jail.conf -# Generated by Puppet - -allow.chflags = true; -exec.start = '/bin/sh /etc/rc'; -exec.stop = '/bin/sh /etc/rc.shutdown'; -mount.devfs = true; -mount.fstab = "/etc/fstab.jail.$name"; -path = "/jail/$name"; - -sync { - host.hostname = 'sync.ian.buetow.org'; - ip4.addr = 192.168.0.17; - ip6.addr = 2a01:4f8:120:30e8::17; -} -admin sun:/etc [1955]% sudo service jail start sync -Password: -Starting jails: sync.
-admin sun:/etc [1956]% jls | grep sync - 103 192.168.0.17 sync.ian.buetow.org /jail/sync -admin sun:/etc [1957]% sudo jexec 103 /bin/csh -root@sync:/ # ifconfig -a -re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 - options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE> - ether 50:46:5d:9f:fd:1e - inet6 2a01:4f8:120:30e8::17 prefixlen 64 - nd6 options=8021<PERFORMNUD,AUTO_LINKLOCAL,DEFAULTIF> - media: Ethernet autoselect (1000baseT <full-duplex>) - status: active -lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 - options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6> - nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL> - lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 - options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6> - inet 192.168.0.17 netmask 0xffffffff - nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL> -</pre> -<h2>Inside-Jail Puppet</h2> -<p>To automatically set up the applications running in the Jail I am using Puppet as well. I wrote a few scripts which bootstrap Puppet inside of a newly created Jail. They do the following:</p> -<ul> -<li>Mounts an encrypted container (containing secret Puppet manifests [a git repository])</li> -<li>Activates "pkg-ng", the FreeBSD binary package manager, in the Jail</li> -<li>Installs Puppet plus all dependencies in the Jail</li> -<li>Updates the Jail via "freebsd-update" to the latest version</li> -<li>Restarts the Jail and invokes Puppet.</li> -<li>Puppet then also schedules a periodic cron job for the next Puppet runs.</li> -</ul> -<pre> -admin sun:~ [1951]% sudo /opt/snonux/local/etc/init.d/enc activate sync -Starting jails: dns. -The package management tool is not yet installed on your system. -Do you want to fetch and install it now? [y/N]: y -Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/freebsd:10:x86:64/latest, please wait... -Verifying signature with trusted certificate pkg.freebsd.org.2013102301...
done -[sync.ian.buetow.org] Installing pkg-1.7.2... -[sync.ian.buetow.org] Extracting pkg-1.7.2: 100% -Updating FreeBSD repository catalogue... -[sync.ian.buetow.org] Fetching meta.txz: 100% 944 B 0.9kB/s 00:01 -[sync.ian.buetow.org] Fetching packagesite.txz: 100% 5 MiB 5.6MB/s 00:01 -Processing entries: 100% -FreeBSD repository update completed. 25091 packages processed. -Updating database digests format: 100% -The following 20 package(s) will be affected (of 0 checked): - - New packages to be INSTALLED: - git: 2.7.4_1 - expat: 2.1.0_3 - python27: 2.7.11_1 - libffi: 3.2.1 - indexinfo: 0.2.4 - gettext-runtime: 0.19.7 - p5-Error: 0.17024 - perl5: 5.20.3_9 - cvsps: 2.1_1 - p5-Authen-SASL: 2.16_1 - p5-Digest-HMAC: 1.03_1 - p5-GSSAPI: 0.28_1 - curl: 7.48.0_1 - ca_root_nss: 3.22.2 - p5-Net-SMTP-SSL: 1.03 - p5-IO-Socket-SSL: 2.024 - p5-Net-SSLeay: 1.72 - p5-IO-Socket-IP: 0.37 - p5-Socket: 2.021 - p5-Mozilla-CA: 20160104 - - The process will require 144 MiB more space. - 30 MiB to be downloaded. 
-[sync.ian.buetow.org] Fetching git-2.7.4_1.txz: 100% 4 MiB 3.7MB/s 00:01 -[sync.ian.buetow.org] Fetching expat-2.1.0_3.txz: 100% 98 KiB 100.2kB/s 00:01 -[sync.ian.buetow.org] Fetching python27-2.7.11_1.txz: 100% 10 MiB 10.7MB/s 00:01 -[sync.ian.buetow.org] Fetching libffi-3.2.1.txz: 100% 35 KiB 36.2kB/s 00:01 -[sync.ian.buetow.org] Fetching indexinfo-0.2.4.txz: 100% 5 KiB 5.0kB/s 00:01 -[sync.ian.buetow.org] Fetching gettext-runtime-0.19.7.txz: 100% 148 KiB 151.1kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Error-0.17024.txz: 100% 24 KiB 24.8kB/s 00:01 -[sync.ian.buetow.org] Fetching perl5-5.20.3_9.txz: 100% 13 MiB 6.9MB/s 00:02 -[sync.ian.buetow.org] Fetching cvsps-2.1_1.txz: 100% 41 KiB 42.1kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Authen-SASL-2.16_1.txz: 100% 44 KiB 45.1kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Digest-HMAC-1.03_1.txz: 100% 9 KiB 9.5kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-GSSAPI-0.28_1.txz: 100% 41 KiB 41.7kB/s 00:01 -[sync.ian.buetow.org] Fetching curl-7.48.0_1.txz: 100% 2 MiB 2.2MB/s 00:01 -[sync.ian.buetow.org] Fetching ca_root_nss-3.22.2.txz: 100% 324 KiB 331.4kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Net-SMTP-SSL-1.03.txz: 100% 11 KiB 10.8kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-IO-Socket-SSL-2.024.txz: 100% 153 KiB 156.4kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Net-SSLeay-1.72.txz: 100% 234 KiB 239.3kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-IO-Socket-IP-0.37.txz: 100% 27 KiB 27.4kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Socket-2.021.txz: 100% 37 KiB 38.0kB/s 00:01 -[sync.ian.buetow.org] Fetching p5-Mozilla-CA-20160104.txz: 100% 147 KiB 150.8kB/s 00:01 -Checking integrity... -[sync.ian.buetow.org] [1/12] Installing libyaml-0.1.6_2... -[sync.ian.buetow.org] [1/12] Extracting libyaml-0.1.6_2: 100% -[sync.ian.buetow.org] [2/12] Installing libedit-3.1.20150325_2... -[sync.ian.buetow.org] [2/12] Extracting libedit-3.1.20150325_2: 100% -[sync.ian.buetow.org] [3/12] Installing ruby-2.2.4,1... 
-[sync.ian.buetow.org] [3/12] Extracting ruby-2.2.4,1: 100% -[sync.ian.buetow.org] [4/12] Installing ruby22-gems-2.6.2... -[sync.ian.buetow.org] [4/12] Extracting ruby22-gems-2.6.2: 100% -[sync.ian.buetow.org] [5/12] Installing libxml2-2.9.3... -[sync.ian.buetow.org] [5/12] Extracting libxml2-2.9.3: 100% -[sync.ian.buetow.org] [6/12] Installing dmidecode-3.0... -[sync.ian.buetow.org] [6/12] Extracting dmidecode-3.0: 100% -[sync.ian.buetow.org] [7/12] Installing rubygem-json_pure-1.8.3... -[sync.ian.buetow.org] [7/12] Extracting rubygem-json_pure-1.8.3: 100% -[sync.ian.buetow.org] [8/12] Installing augeas-1.4.0... -[sync.ian.buetow.org] [8/12] Extracting augeas-1.4.0: 100% -[sync.ian.buetow.org] [9/12] Installing rubygem-facter-2.4.4... -[sync.ian.buetow.org] [9/12] Extracting rubygem-facter-2.4.4: 100% -[sync.ian.buetow.org] [10/12] Installing rubygem-hiera1-1.3.4_1... -[sync.ian.buetow.org] [10/12] Extracting rubygem-hiera1-1.3.4_1: 100% -[sync.ian.buetow.org] [11/12] Installing rubygem-ruby-augeas-0.5.0_2... -[sync.ian.buetow.org] [11/12] Extracting rubygem-ruby-augeas-0.5.0_2: 100% -[sync.ian.buetow.org] [12/12] Installing puppet38-3.8.4_1... -===> Creating users and/or groups. -Creating group 'puppet' with gid '814'. -Creating user 'puppet' with uid '814'. -[sync.ian.buetow.org] [12/12] Extracting puppet38-3.8.4_1: 100% -. -. -. -. -. -Looking up update.FreeBSD.org mirrors... 4 mirrors found. -Fetching public key from update4.freebsd.org... done. -Fetching metadata signature for 10.1-RELEASE from update4.freebsd.org... done. -Fetching metadata index... done. -Fetching 2 metadata files... done. -Inspecting system... done. -Preparing to download files... done. -Fetching 874 patches.....10....20....30.... -. -. -. -Applying patches... done. -Fetching 1594 files... -Installing updates... -done. 
-Info: Loading facts -Info: Loading facts -Info: Loading facts -Info: Loading facts -Could not retrieve fact='pkgng_version', resolution='<anonymous>': undefined method `pkgng_enabled' for Facter:Module -Warning: Config file /usr/local/etc/puppet/hiera.yaml not found, using Hiera defaults -Notice: Compiled catalog for sync.ian.buetow.org in environment production in 1.31 seconds -Warning: Found multiple default providers for package: pkgng, gem, pip; using pkgng -Info: Applying configuration version '1460192563' -Notice: /Stage[main]/S_base_freebsd/User[root]/shell: shell changed '/bin/csh' to '/bin/tcsh' -Notice: /Stage[main]/S_user::Root_files/S_user::All_files[root_user]/File[/root/user]/ensure: created -Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/userfiles]/ensure: created -Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/.task]/ensure: created -. -. -. -. -Notice: Finished catalog run in 206.09 seconds -</pre> -<h2>Managing multiple Jails</h2> -<p>Of course I am operating multiple Jails on the same host this way with Puppet:</p> -<ul> -<li>A Jail for the MTA</li> -<li>A Jail for the Webserver</li> -<li>A Jail for the BIND DNS server</li> -<li>A Jail for syncing data back and forth between various servers</li> -<li>A Jail for other personal (experimental) use</li> -<li>...etc</li> -</ul> -<p>All done in a pretty automated manner. </p> -<p>E-Mail me your thoughts at comments@mx.buetow.org!</p> - </div> - </content> - </entry> - <entry> - <title>Offsite backup with ZFS</title> - <link href="gemini://buetow.org/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi" /> - <id>gemini://buetow.org/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi</id> - <updated>2016-04-03T22:43:42+01:00</updated> - <author> - <name>Paul Buetow</name> - <email>comments@mx.buetow.org</email> - </author> - <summary>When it comes to data storage and potential data loss I am a paranoid person.
It is not just due to my job but also due to a personal experience from over 10 years ago: a single drive failure and the loss of all my data (pictures, music, ....). ...to read on visit my site.</summary> - <content type="xhtml"> - <div xmlns="http://www.w3.org/1999/xhtml"> - <h1>Offsite backup with ZFS</h1> -<pre> - ________________ -|# : : #| -| : ZFS/GELI : | -| : Offsite : | -| : Backup : | -| :___________: | -| _________ | -| | __ | | -| || | | | -\____||__|_____|__| -</pre> -<p class="quote"><i>Written by Paul Buetow 2016-04-03</i></p> -<h2>Please don't lose all my pictures again!</h2> -<p>When it comes to data storage and potential data loss I am a paranoid person. It is not just due to my job but also due to a personal experience from over 10 years ago: a single drive failure and the loss of all my data (pictures, music, ....).</p> -<p>A little about my personal infrastructure: I am running my own (mostly FreeBSD based) root servers (across several countries: Two in Germany, one in Canada, one in Bulgaria) which store all my online data (E-Mail and my Git repositories). I am syncing incremental (and encrypted) ZFS snapshots back and forth between these servers, so the data could be recovered from either server.</p> -<h2>Local storage box for offline data</h2> -<p>Also, I am operating a local server (an HP MicroServer) at home in my apartment. Full snapshots of all ZFS volumes are pulled from the "online" servers to the local server every other week, and the incremental ZFS snapshots every day. That local server has a ZFS mirror with 3 disks configured (local triple redundancy). I keep up to half a year's worth of ZFS snapshots of all volumes. That local server also contains all my offline data such as pictures, private documents, videos, books, various other backups, etc.</p> -<p>Once a week all the data of that local server is copied to two external USB drives as a backup (without the historic snapshots).
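The incremental snapshot pulls described above are typically done with zfs send/recv over SSH. The following is a dry-run sketch that only echoes the commands; the host name, dataset names, and snapshot names are assumptions for illustration, not taken from the author's setup.

```shell
# Dry-run sketch of pulling incremental ZFS snapshots from a remote server.
# Host, dataset, and snapshot names are assumptions, not from a real setup.
run() { echo "+ $*"; }   # swap the echo for "$@" to really execute

run ssh root@server.example.org zfs snapshot -r zroot@2016-04-03
run "ssh root@server.example.org 'zfs send -i zroot@2016-03-20 zroot@2016-04-03' | zfs recv -F ztank/backup"
```

The `-i` flag sends only the delta between the two snapshots, which is what keeps the daily transfers small compared to the full snapshots pulled every other week.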
For simplicity these USB drives are not formatted with ZFS but with good old UFS. This gives me a chance to recover from a (potential) ZFS disaster. ZFS is a complex thing. Sometimes it is good not to trust complex things!</p> -<h2>Storing it at my apartment is not enough</h2> -<p>Now I am thinking about an offsite backup of all this local data. The problem is that all the data remains in a single physical location: my local MicroServer. What happens if the house burns down or someone steals my server including the internal disks and the attached USB drives? My first thought was to back up everything to the "cloud". The major issue here, however, is the limited upload bandwidth available (only 1 MBit/s).</p> -<p>The solution is adding another USB drive (2TB) with an encryption container (GELI) and a ZFS pool on it. The GELI encryption requires a secret key and a secret passphrase. I update the data on that drive once every 3 months (my calendar reminds me about it) and afterwards keep the drive at a secret location outside of my apartment. All the information needed to decrypt (mounting the GELI container) is stored at another (secure) place. Key and passphrase are kept at different places though. Even if someone knew of it, they would not be able to decrypt it, as some additional insider knowledge would be required as well.</p> -<h2>Walking one round less</h2> -<p>I am thinking of buying a second 2TB USB drive and setting it up the same way as the first one. So I could alternate the backups. One drive would be at the secret location, and the other drive would be at home. And these drives would swap location after each cycle.
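The GELI-plus-ZFS setup of such an offsite drive might look roughly like the dry-run sketch below, which only echoes the commands. The device name, key file path, and pool name are assumptions; on FreeBSD, `geli attach` exposes an encrypted device node with an `.eli` suffix on which the pool lives.

```shell
# Dry-run sketch of preparing and cycling an encrypted (GELI) offsite drive.
# /dev/da0, the key path, and the pool name "zoffsite" are assumptions.
run() { echo "+ $*"; }   # swap the echo for "$@" to really execute

run geli init -s 4096 -K /secret/offsite.key /dev/da0   # also prompts for a passphrase
run geli attach -k /secret/offsite.key /dev/da0         # creates /dev/da0.eli
run zpool create zoffsite /dev/da0.eli                  # ZFS pool on the encrypted device
run zpool export zoffsite                               # before unplugging the drive
run geli detach /dev/da0.eli
```

Because both a key file and a passphrase are required, possession of the drive alone (or of either secret alone) is not enough to read the data, which matches the split-secret scheme described above.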
This would give some protection against the failure of a drive, and I would have to go to the secret location only once per cycle (to swap the drives) instead of twice (picking the drive up in order to update the data and bringing it back to the secret location).</p> -<p>E-Mail me your thoughts at comments@mx.buetow.org!</p> - </div> - </content> - </entry> - <entry> - <title>Run Debian on your phone with Debroid</title> - <link href="gemini://buetow.org/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid.gmi" /> - <id>gemini://buetow.org/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid.gmi</id> - <updated>2015-12-05T16:12:57+00:00</updated> - <author> - <name>Paul Buetow</name> - <email>comments@mx.buetow.org</email> - </author> - <summary>You can use the following tutorial to install a full-blown Debian GNU/Linux Chroot on a LG G3 D855 CyanogenMod 13 (Android 6). First of all you need to have root permissions on your phone and you also need to have the developer mode activated. The following steps have been tested on Linux (Fedora 23). .....to read on please visit my site.</summary> - <content type="xhtml"> - <div xmlns="http://www.w3.org/1999/xhtml"> - <h1>Run Debian on your phone with Debroid</h1> -<pre> - ____ _ _ _ -| _ \ ___| |__ _ __ ___ (_) __| | -| | | |/ _ \ '_ \| '__/ _ \| |/ _` | -| |_| | __/ |_) | | | (_) | | (_| | -|____/ \___|_.__/|_| \___/|_|\__,_| - -</pre> -<p class="quote"><i>Written by Paul Buetow 2015-12-05, last updated 2021-05-16</i></p> -<p>You can use the following tutorial to install a full-blown Debian GNU/Linux Chroot on a LG G3 D855 CyanogenMod 13 (Android 6). First of all you need to have root permissions on your phone and you also need to have the developer mode activated.
The following steps have been tested on Linux (Fedora 23).</p> -<a href="https://buetow.org/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid/Deboroid.png"><img src="https://buetow.org/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid/Deboroid.png" /></a><br /> -<h2>Foreword</h2> -<p>A couple of years have passed since I last worked on Debroid. At the moment I am using the Termux app on Android, which is less sophisticated than a full-blown Debian installation, but sufficient for my current requirements. The content of this site may still be relevant, and it would also work with more recent versions of Debian and Android. I would expect that some minor modifications need to be made though. </p> -<h2>Step by step guide</h2> -<p>All scripts mentioned here can be found on GitHub at:</p> -<a class="textlink" href="https://github.com/snonux/debroid">https://github.com/snonux/debroid</a><br /> -<h3>First debootstrap stage</h3> -<p>This is to be performed on a Fedora Linux machine (it could work on Debian too, but Fedora is just what I use on my personal laptop).
The following steps prepare an initial Debian base image, which can later be transferred to the phone.</p> -<pre> -sudo dnf install debootstrap -# 5g -dd if=/dev/zero of=jessie.img bs=$[ 1024 * 1024 ] \ - count=$[ 1024 * 5 ] - -# Show used loop devices -sudo losetup -f -# Store the next free one to $loop -loop=loopN -sudo losetup /dev/$loop jessie.img - -mkdir jessie -sudo mkfs.ext4 /dev/$loop -sudo mount /dev/$loop jessie -sudo debootstrap --foreign --variant=minbase \ - --arch armel jessie jessie/ \ - http://http.debian.net/debian -sudo umount jessie -</pre> -<h3>Copy Debian image to the phone</h3> -<p>Now set up the Debian image on an external SD card on the Phone via Android Debugger as follows:</p> -<pre> -adb root && adb wait-for-device && adb shell -mkdir -p /storage/sdcard1/Linux/jessie -exit - -# Sparse image problem, may be too big for copying otherwise -gzip jessie.img -# Copy over -adb push jessie.img.gz /storage/sdcard1/Linux/jessie.img.gz -adb shell -cd /storage/sdcard1/Linux -gunzip jessie.img.gz - -# Show used loop devices -losetup -f -# Store the next free one to $loop -loop=loopN - -# Use the next free one (replace the loop number) -losetup /dev/block/$loop $(pwd)/jessie.img -mount -t ext4 /dev/block/$loop $(pwd)/jessie - -# Bind-mount proc, dev, sys -busybox mount --bind /proc $(pwd)/jessie/proc -busybox mount --bind /dev $(pwd)/jessie/dev -busybox mount --bind /dev/pts $(pwd)/jessie/dev/pts -busybox mount --bind /sys $(pwd)/jessie/sys - -# Bind-mount the rest of Android -mkdir -p $(pwd)/jessie/storage/sdcard{0,1} -busybox mount --bind /storage/emulated \ - $(pwd)/jessie/storage/sdcard0 -busybox mount --bind /storage/sdcard1 \ - $(pwd)/jessie/storage/sdcard1 - -# Check mounts -mount | grep jessie -</pre> -<h3>Second debootstrap stage</h3> -<p>This is to be performed on the Android phone itself (inside a Debian chroot):</p> -<pre> -chroot $(pwd)/jessie /bin/bash -l -export PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin
-
/debootstrap/debootstrap --second-stage
-exit # Leave chroot
-exit # Leave adb shell
-</pre>
-<h3>Setup of various scripts</h3>
-<p>jessie.sh deals with all the loopback mount magic and so on. It will be run later every time you start Debroid on your phone.</p>
-<pre>
-# Install script jessie.sh
-adb push storage/sdcard1/Linux/jessie.sh /storage/sdcard1/Linux/jessie.sh
-adb shell
-cd /storage/sdcard1/Linux
-sh jessie.sh enter
-
-# Bashrc
-cat <<END >~/.bashrc
-export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH
-export EDITOR=vim
-hostname $(cat /etc/hostname)
-END
-
-# Fixing an error message while loading the profile
-sed -i s#id#/usr/bin/id# /etc/profile
-
-# Setting the hostname
-echo phobos > /etc/hostname
-echo 127.0.0.1 phobos > /etc/hosts
-hostname phobos
-
-# Apt-sources
-cat <<END > sources.list
-deb http://ftp.uk.debian.org/debian/ jessie main contrib non-free
-deb-src http://ftp.uk.debian.org/debian/ jessie main contrib non-free
-END
-apt-get update
-apt-get upgrade
-apt-get dist-upgrade
-exit # Exit chroot
-</pre>
-<h3>Entering Debroid and enabling a service</h3>
-<p>This enters Debroid on your phone and starts the example service uptimed:</p>
-<pre>
-sh jessie.sh enter
-
-# Setup example service uptimed
-apt-get install uptimed
-cat <<END > /etc/rc.debroid
-export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH
-service uptimed status &>/dev/null || service uptimed start
-exit 0
-END
-
-chmod 0755 /etc/rc.debroid
-exit # Exit chroot
-exit # Exit adb shell
-</pre>
-<h3>Include in Android startup:</h3>
-<p>If you want to start Debroid automatically every time your phone starts, then do the following:</p>
-<pre>
-adb push data/local/userinit.sh /data/local/userinit.sh
-adb shell
-chmod +x /data/local/userinit.sh
-exit
-</pre>
-<p>Reboot & test! 
Enjoy!</p>
-<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
- </div>
- </content>
- </entry>
- <entry>
- <title>The fibonacci.pl.c Polyglot</title>
- <link href="gemini://buetow.org/gemfeed/2014-03-24-the-fibonacci.pl.c-polyglot.gmi" />
- <id>gemini://buetow.org/gemfeed/2014-03-24-the-fibonacci.pl.c-polyglot.gmi</id>
- <updated>2014-03-24T21:32:53+00:00</updated>
- <author>
- <name>Paul Buetow</name>
- <email>comments@mx.buetow.org</email>
- </author>
- <summary>In computing, a polyglot is a computer program or script written in a valid form of multiple programming languages, which performs the same operations or output independent of the programming language used to compile or interpret it. .....to read on please visit my site.</summary>
- <content type="xhtml">
- <div xmlns="http://www.w3.org/1999/xhtml">
- <h1>The fibonacci.pl.c Polyglot</h1>
-<p class="quote"><i>Written by Paul Buetow 2014-03-24</i></p>
-<p>In computing, a polyglot is a computer program or script written in a valid form of multiple programming languages, which performs the same operations or output independent of the programming language used to compile or interpret it.</p>
-<a class="textlink" href="https://en.wikipedia.org/wiki/Polyglot_(computing)">https://en.wikipedia.org/wiki/Polyglot_(computing)</a><br />
-<h2>The Fibonacci numbers</h2>
-<p>For fun, I programmed my own Polyglot, which is both valid Perl and C code. 
The interesting part about C is that $ is a valid character to start variable names with:</p>
-<pre>
-#include <stdio.h>
-
-#define $arg function_argument
-#define my int
-#define sub int
-#define BEGIN int main(void)
-
-my $arg;
-
-sub hello() {
- printf("Hello, welcome to Perl-C!\n");
- printf("This program is both, valid C and Perl code!\n");
- printf("It calculates all fibonacci numbers from 0 to 9!\n\n");
- return 0;
-}
-
-sub fibonacci() {
- my $n = $arg;
-
- if ($n < 2) {
- return $n;
- }
-
- $arg = $n - 1;
- my $fib1 = fibonacci();
- $arg = $n - 2;
- my $fib2 = fibonacci();
-
- return $fib1 + $fib2;
-}
-
-BEGIN {
- hello();
- my $i = 0;
-
- for ($i = 0; $i <= 10; ++$i) {
- $arg = $i;
- printf("fib(%d) = %d\n", $i, fibonacci());
- }
-
- return 0;
-}
-</pre>
-<p>You can find the whole source code at GitHub:</p>
-<a class="textlink" href="https://github.com/snonux/perl-c-fibonacci">https://github.com/snonux/perl-c-fibonacci</a><br />
-<h3>Let's run it with Perl:</h3>
-<pre>
-❯ perl fibonacci.pl.c
-Hello, welcome to Perl-C!
-This program is both, valid C and Perl code!
-It calculates all fibonacci numbers from 0 to 9!
-
-fib(0) = 0
-fib(1) = 1
-fib(2) = 1
-fib(3) = 2
-fib(4) = 3
-fib(5) = 5
-fib(6) = 8
-fib(7) = 13
-fib(8) = 21
-fib(9) = 34
-fib(10) = 55
-</pre>
-<h3>Let's compile it as C and run the binary:</h3>
-<pre>
-❯ gcc fibonacci.pl.c -o fibonacci
-❯ ./fibonacci
-Hello, welcome to Perl-C!
-This program is both, valid C and Perl code!
-It calculates all fibonacci numbers from 0 to 9! 
-
-
-fib(0) = 0
-fib(1) = 1
-fib(2) = 1
-fib(3) = 2
-fib(4) = 3
-fib(5) = 5
-fib(6) = 8
-fib(7) = 13
-fib(8) = 21
-fib(9) = 34
-fib(10) = 55
-</pre>
-<p>It's really fun to play with :-).</p>
-<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
- </div>
- </content>
- </entry>
- <entry>
- <title>Perl Daemon (Service Framework)</title>
- <link href="gemini://buetow.org/gemfeed/2011-05-07-perl-daemon-service-framework.gmi" />
- <id>gemini://buetow.org/gemfeed/2011-05-07-perl-daemon-service-framework.gmi</id>
- <updated>2011-05-07T22:26:02+01:00</updated>
- <author>
- <name>Paul Buetow</name>
- <email>comments@mx.buetow.org</email>
- </author>
- <summary>PerlDaemon is a minimal daemon for Linux and other Unix-like operating systems programmed in Perl. It is a minimal but pretty functional and fairly generic service framework. This means that it does not do anything useful other than providing a framework for starting, stopping, configuring and logging. In order to do something useful, a module (written in Perl) must be provided. .....to read on please visit my site.</summary>
- <content type="xhtml">
- <div xmlns="http://www.w3.org/1999/xhtml">
- <h1>Perl Daemon (Service Framework)</h1>
-<pre>
- a'! _,,_ a'! _,,_ a'! _,,_
- \\_/ \ \\_/ \ \\_/ \.-,
- \, /-( /'-,\, /-( /'-, \, /-( /
- //\ //\\ //\ //\\ //\ //\\jrei
-</pre>
-<p class="quote"><i>Written by Paul Buetow 2011-05-07, last updated 2021-05-07</i></p>
-<p>PerlDaemon is a minimal daemon for Linux and other Unix-like operating systems programmed in Perl. It is a minimal but pretty functional and fairly generic service framework. This means that it does not do anything useful other than providing a framework for starting, stopping, configuring and logging. 
In order to do something useful, a module (written in Perl) must be provided.</p>
-<h2>Features</h2>
-<p>PerlDaemon supports:</p>
-<ul>
-<li>Automatic daemonizing</li>
-<li>Logging</li>
-<li>Log rotation (via SIGHUP)</li>
-<li>Clean shutdown support (SIGTERM)</li>
-<li>PID file support (incl. check on startup)</li>
-<li>Easy to configure</li>
-<li>Easy to extend</li>
-<li>Multi instance support (just use a different directory for each instance).</li>
-</ul>
-<h2>Quick Guide</h2>
-<pre>
-# Starting
- ./bin/perldaemon start (or shortcut ./control start)
-
-# Stopping
- ./bin/perldaemon stop (or shortcut ./control stop)
-
-# Alternatively: Starting in foreground
-./bin/perldaemon start daemon.daemonize=no (or shortcut ./control foreground)
-</pre>
-<p>To stop a daemon running in foreground mode, hit "Ctrl+C". To see more available startup options, run "./control" without any argument.</p>
-<h2>How to configure</h2>
-<p>The daemon instance can be configured in "./conf/perldaemon.conf". If you want to change a property only once, it is also possible to specify it on the command line (which then takes precedence over the config file). 
All available config properties can be viewed via "./control keys":</p>
-<pre>
-pb@titania:~/svn/utils/perldaemon/trunk$ ./control keys
-# Path to the logfile
-daemon.logfile=./log/perldaemon.log
-
-# The amount of seconds until the next event loop takes place
-daemon.loopinterval=1
-
-# Path to the modules dir
-daemon.modules.dir=./lib/PerlDaemonModules
-
-# Specifies whether the daemon should run in daemon or foreground mode
-daemon.daemonize=yes
-
-# Path to the pidfile
-daemon.pidfile=./run/perldaemon.pid
-
-# Each module should run every runinterval seconds
-daemon.modules.runinterval=3
-
-# Path to the alive file (is touched every loopinterval seconds, usable to monitor)
-daemon.alivefile=./run/perldaemon.alive
-
-# Specifies the working directory
-daemon.wd=./
-</pre>
-<h2>Example</h2>
-<p>So let's start the daemon with a loop interval of 10 seconds:</p>
-<pre>
-$ ./control keys | grep daemon.loopinterval
-daemon.loopinterval=1
-$ ./control keys daemon.loopinterval=10 | grep daemon.loopinterval
-daemon.loopinterval=10
-$ ./control start daemon.loopinterval=10; sleep 10; tail -n 2 log/perldaemon.log
-Starting daemon now...
-Mon Jun 13 11:29:27 2011 (PID 2838): Triggering PerlDaemonModules::ExampleModule
-(last triggered before 10.002106s; carry: 7.002106s; wanted interval: 3s)
-Mon Jun 13 11:29:27 2011 (PID 2838): ExampleModule Test 2
-$ ./control stop
-Stopping daemon now...
-</pre>
-<p>If you want to change that property forever, either edit perldaemon.conf or do this:</p>
-<pre>
-$ ./control keys daemon.loopinterval=10 > new.conf; mv new.conf conf/perldaemon.conf
-</pre>
-<h2>HiRes event loop</h2>
-<p>PerlDaemon uses `Time::HiRes` to make sure that all the events run in correct intervals. On each loop run, a time carry value is recorded and added to the next loop run in order to catch up on lost time.</p>
-<h2>Writing your own modules</h2>
-<h3>Example module</h3>
-<p>This is one of the example modules you will find in the source code. 
It should be quite self-explanatory if you know Perl :-).</p>
-<pre>
-package PerlDaemonModules::ExampleModule;
-
-use strict;
-use warnings;
-
-sub new ($$$) {
- my ($class, $conf) = @_;
-
- my $self = bless { conf => $conf }, $class;
-
- # Store some private module stuff
- $self->{counter} = 0;
-
- return $self;
-}
-
-# Runs periodically in a loop (set interval in perldaemon.conf)
-sub do ($) {
- my $self = shift;
- my $conf = $self->{conf};
- my $logger = $conf->{logger};
-
- # Calculate some private module stuff
- my $count = ++$self->{counter};
-
- $logger->logmsg("ExampleModule Test $count");
-}
-
-1;
-</pre>
-<h3>Your own module</h3>
-<p>Want to give it some better use? It's just as easy as:</p>
-<pre>
- cd ./lib/PerlDaemonModules/
- cp ExampleModule.pm YourModule.pm
- vi YourModule.pm
- cd -
- ./bin/perldaemon restart (or shortcut ./control restart)
-</pre>
-<p>Now watch `./log/perldaemon.log` closely. It is good practice to test your modules in 'foreground mode' (see above how to do that).</p>
-<p>BTW: You can install as many modules within the same instance as desired. But they are run in sequential order (in the future they may also run in parallel using several threads or processes).</p>
-<h2>May the source be with you</h2>
-<p>You can find PerlDaemon (including the examples) at:</p>
-<a class="textlink" href="https://github.com/snonux/perldaemon">https://github.com/snonux/perldaemon</a><br />
-<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
- </div>
- </content>
- </entry>
- <entry>
- <title>The Fype Programming Language</title>
- <link href="gemini://buetow.org/gemfeed/2010-05-09-the-fype-programming-language.gmi" />
- <id>gemini://buetow.org/gemfeed/2010-05-09-the-fype-programming-language.gmi</id>
- <updated>2010-05-09T12:48:29+01:00</updated>
- <author>
- <name>Paul Buetow</name>
- <email>comments@mx.buetow.org</email>
- </author>
- <summary>Fype is an interpreted programming language created by me for learning and fun. 
The interpreter is written in C. It has been tested on FreeBSD and NetBSD and may also work on other Unix-like operating systems such as Linux-based ones. To be honest, besides learning and fun there is really no other reason why Fype exists, as many other programming languages are much faster and more powerful. .....to read on please visit my site.</summary>
- <content type="xhtml">
- <div xmlns="http://www.w3.org/1999/xhtml">
- <h1>The Fype Programming Language</h1>
-<pre>
- ____ _ __
- / / _|_ _ _ __ ___ _ _ ___ __ _| |__ / _|_ _
- / / |_| | | | '_ \ / _ \ | | | |/ _ \/ _` | '_ \ | |_| | | |
- _ / /| _| |_| | |_) | __/ | |_| | __/ (_| | | | |_| _| |_| |
-(_)_/ |_| \__, | .__/ \___| \__, |\___|\__,_|_| |_(_)_| \__, |
- |___/|_| |___/ |___/
-</pre>
-<p class="quote"><i>Written by Paul Buetow 2010-05-09, last updated 2021-05-05</i></p>
-<p>Fype is an interpreted programming language created by me for learning and fun. The interpreter is written in C. It has been tested on FreeBSD and NetBSD and may also work on other Unix-like operating systems such as Linux-based ones. To be honest, besides learning and fun there is really no other reason why Fype exists, as many other programming languages are much faster and more powerful.</p>
-<p>The Fype syntax is very simple: it uses a maximum look-ahead of 1 and a very simple top-down parsing mechanism. Fype parses and interprets its code simultaneously. This means that syntax errors are only detected at program runtime.</p>
-<p>Fype is a recursive acronym and means "Fype is For Your Program Execution" or "Fype is Free Yak Programmed for ELF". You could also say "It's not a hype - it's Fype!".</p>
-<h2>Object oriented C style</h2>
-<p>The Fype interpreter is written in an object-oriented style of C. Each "main component" has its own .h and .c file. 
There is a struct type for each component (most components, at least), which can be initialized using a "COMPONENT_new" function and destroyed using a "COMPONENT_delete" function. Method calls follow the same scheme, e.g. "COMPONENT_METHODNAME". There is no such thing as class inheritance or polymorphism involved.</p>
-<p>To give you an idea of how it works, here is an example snippet from the main Fype "class header":</p>
-<pre>
-typedef struct {
- Tupel *p_tupel_argv; // Contains command line options
- List *p_list_token; // Initial list of tokens
- Hash *p_hash_syms; // Symbol table
- char *c_basename;
-} Fype;
-</pre>
-<p>And here is a snippet from the main Fype "class implementation":</p>
-<pre>
-Fype*
-fype_new() {
- Fype *p_fype = malloc(sizeof(Fype));
-
- p_fype->p_hash_syms = hash_new(512);
- p_fype->p_list_token = list_new();
- p_fype->p_tupel_argv = tupel_new();
- p_fype->c_basename = NULL;
-
- garbage_init();
-
- return (p_fype);
-}
-
-void
-fype_delete(Fype *p_fype) {
- argv_tupel_delete(p_fype->p_tupel_argv);
-
- hash_iterate(p_fype->p_hash_syms, symbol_cleanup_hash_syms_cb);
- hash_delete(p_fype->p_hash_syms);
-
- list_iterate(p_fype->p_list_token, token_ref_down_cb);
- list_delete(p_fype->p_list_token);
-
- if (p_fype->c_basename)
- free(p_fype->c_basename);
-
- garbage_destroy();
-}
-
-int
-fype_run(int i_argc, char **pc_argv) {
- Fype *p_fype = fype_new();
-
- // argv: Maintains command line options
- argv_run(p_fype, i_argc, pc_argv);
-
- // scanner: Creates a list of tokens
- scanner_run(p_fype);
-
- // interpret: Interpret the list of tokens
- interpret_run(p_fype);
-
- fype_delete(p_fype);
-
- return (0);
-}
-</pre>
-<h2>Data types</h2>
-<p>Fype uses auto type conversion. 
However, if you want to know what's going on, you may take a look at the following basic data types:</p>
-<ul>
-<li>integer - Specifies a number</li>
-<li>double - Specifies a double precision number</li>
-<li>string - Specifies a string</li>
-<li>number - May be an integer or a double number</li>
-<li>any - May be any type above</li>
-<li>void - No type</li>
-<li>identifier - It's a variable, procedure, or function name</li>
-</ul>
-<p>There is no boolean type, but we can use the integer values 0 for false and 1 for true. There is support for explicit type casting too.</p>
-<h2>Syntax</h2>
-<h3>Comments</h3>
-<p>Text from a # character until the end of the current line is considered a comment. Multi-line comments may start with #* and end with *# anywhere. The exception is when those markers appear inside strings.</p>
-<h3>Variables</h3>
-<p>Variables can be defined with the "my" keyword (inspired by Perl :-). If you don't assign a value during declaration, then it defaults to the integer value 0. Variables may be changed during program runtime. Variables may be deleted using the "undef" keyword! Example:</p>
-<pre>
-my foo = 1 + 2;
-say foo;
-
-my bar = 12, baz = foo;
-say 1 + bar;
-say bar;
-
-my baz;
-say baz; # Will print out 0
-</pre>
-<p>You may use the "defined" keyword to check if an identifier has been defined or not:</p>
-<pre>
-ifnot defined foo {
- say "No foo yet defined";
-}
-
-my foo = 1;
-
-if defined foo {
- put "foo is defined and has the value ";
- say foo;
-}
-</pre>
-<h3>Synonyms</h3>
-<p>Each variable can have as many synonyms as you wish. A synonym is another name to access the content of a specific variable. Here is an example of how to use it:</p>
-<pre>
-my foo = "foo";
-my bar = \foo;
-foo = "bar";
-
-# The synonym variable should now also be set to "bar"
-assert "bar" == bar;
-</pre>
-<p>Synonyms can be used for all kinds of identifiers. 
It's not limited to normal variables but can also be used for function and procedure names, etc. (more about functions and procedures later).</p>
-<pre>
-# Create a new procedure baz
-proc baz { say "I am baz"; }
-
-# Make a synonym bay, and undefine baz
-my bay = \baz;
-
-undef baz;
-
-# bay still has a reference to the original procedure baz
-bay; # this prints out "I am baz"
-</pre>
-<p>The "syms" keyword gives you the total number of synonyms pointing to a specific value:</p>
-<pre>
-my foo = 1;
-say syms foo; # Prints 1
-
-my baz = \foo;
-say syms foo; # Prints 2
-say syms baz; # Prints 2
-
-undef baz;
-say syms foo; # Prints 1
-</pre>
-<h2>Statements and expressions</h2>
-<p>A Fype program is a list of statements. Each keyword, expression or function call is part of a statement. Each statement ends with a semicolon. Example:</p>
-<pre>
-my bar = 3, foo = 1 + 2;
-say foo;
-exit foo - bar;
-</pre>
-<h3>Parentheses</h3>
-<p>All parentheses for function arguments are optional. They help to make the code more readable. They also help to force precedence of expressions.</p>
-<h3>Basic expressions</h3>
-<p>Any "any" value holding a string will be automatically converted to an integer value.</p>
-<pre>
-(any) <any> + <any>
-(any) <any> - <any>
-(any) <any> * <any>
-(any) <any> / <any>
-(integer) <any> == <any>
-(integer) <any> != <any>
-(integer) <any> <= <any>
-(integer) <any> gt <any>
-(integer) <any> <> <any>
-(integer) <any> gt <any>
-(integer) not <any>
-</pre>
-<h3>Bitwise expressions</h3>
-<pre>
-(integer) <any> :< <any>
-(integer) <any> :> <any>
-(integer) <any> and <any>
-(integer) <any> or <any>
-(integer) <any> xor <any>
-</pre>
-<h3>Numeric expressions</h3>
-<pre>
-(number) neg <number>
-</pre>
-<p>... returns the negative value of "number":</p>
-<pre>
-(integer) no <integer>
-</pre>
-<p>... returns 1 if the argument is 0, otherwise it will return 0! If no argument is given, then 0 is returned!</p>
-<pre>
-(integer) yes <integer>
-</pre>
-<p>... 
always returns 1. The parameter is optional. Example:</p> -<pre> -# Prints out 1, because foo is not defined -if yes { say no defined foo; } -</pre> -<h2>Control statements</h2> -<p>Control statements available in Fype:</p> -<pre> -if <expression> { <statements> } -</pre> -<p>... runs the statements if the expression evaluates to a true value.</p> -<pre> -ifnot <expression> { <statements> } -</pre> -<p>... runs the statements if the expression evaluates to a false value.</p> -<pre> -while <expression> { <statements> } -</pre> -<p>... runs the statements as long as the expression evaluates to a true value.</p> -<pre> -until <expression> { <statements> } -</pre> -<p>... runs the statements as long as the expression evaluates to a false value.</p> -<h2>Scopes</h2> -<p>A new scope starts with an { and ends with an }. An exception is a procedure, which does not use its own scope (see later in this manual). Control statements and functions support scopes. The "scope" function prints out all available symbols at the current scope. Here is a small example:</p> -<pre> -my foo = 1; - -{ - # Prints out 1 - put defined foo; - { - my bar = 2; - - # Prints out 1 - put defined bar; - - # Prints out all available symbols at this - # point to stdout. 
Those are: bar and foo
- scope;
- }
-
- # Prints out 0
- put defined bar;
-
- my baz = 3;
-}
-
-# Prints out 0
-say defined bar;
-</pre>
-<p>Another example, including actual output:</p>
-<pre>
-./fype -e 'my global; func foo { my var4; func bar { my var2, var3; func baz { my var1; scope; } baz; } bar; } foo;'
-Scopes:
-Scope stack size: 3
-Global symbols:
-SYM_VARIABLE: global (id=00034, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
-SYM_FUNCTION: foo
-Local symbols:
-SYM_VARIABLE: var1 (id=00038, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
-1 level(s) up:
-SYM_VARIABLE: var2 (id=00036, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
-SYM_VARIABLE: var3 (id=00037, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
-SYM_FUNCTION: baz
-2 level(s) up:
-SYM_VARIABLE: var4 (id=00035, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
-SYM_FUNCTION: bar
-</pre>
-<h2>Definedness</h2>
-<pre>
-(integer) defined <identifier>
-</pre>
-<p>... returns 1 if "identifier" has been defined. Returns 0 otherwise.</p>
-<pre>
-(integer) undef <identifier>
-</pre>
-<p>... tries to undefine/delete the "identifier". Returns 1 if it succeeded, otherwise 0 is returned.</p>
-<h2>System</h2>
-<p>These are some system- and interpreter-specific built-in functions supported:</p>
-<pre>
-(void) end
-</pre>
-<p>... exits the program with the exit status of 0.</p>
-<pre>
-(void) exit <integer>
-</pre>
-<p>... exits the program with the specified exit status.</p>
-<pre>
-(integer) fork
-</pre>
-<p>... forks a subprocess. It returns 0 for the child process and the pid of the child process otherwise! Example:</p>
-<pre>
-my pid = fork;
-
-if pid {
- put "I am the parent process; child has the pid ";
- say pid;
-
-} ifnot pid {
- say "I am the child process";
-}
-</pre>
-<p>To execute the garbage collector do:</p>
-<pre>
-(integer) gc
-</pre>
-<p>It returns the number of items freed! 
You may wonder why most of the time it will return a value of 0! Fype tries to free unneeded memory ASAP. This may change in future versions in order to gain faster execution speed!</p>
-<h3>I/O</h3>
-<pre>
-(any) put <any>
-</pre>
-<p>... prints out the argument.</p>
-<pre>
-(any) say <any>
-</pre>
-<p>... is the same as put, but also includes a trailing newline.</p>
-<pre>
-(void) ln
-</pre>
-<p>... just prints a newline.</p>
-<h2>Procedures and functions</h2>
-<h3>Procedures</h3>
-<p>A procedure can be defined with the "proc" keyword and deleted with the "undef" keyword. A procedure does not return any value and does not support parameter passing. It uses already defined variables (e.g. global variables). A procedure does not have its own namespace. It uses the calling namespace. It is possible to define new variables inside of a procedure in the current namespace.</p>
-<pre>
-proc foo {
- say 1 + a * 3 + b;
- my c = 6;
-}
-
-my a = 2, b = 4;
-
-foo; # Run the procedure. Print out "11\n"
-say c; # Print out "6\n";
-</pre>
-<h3>Nested procedures</h3>
-<p>It's possible to define procedures inside of procedures. Since procedures don't have their own scope, nested procedures become available in the current scope as soon as the enclosing procedure has run for the first time. You may use the "defined" keyword in order to check if a procedure has been defined or not.</p>
-<pre>
-proc foo {
- say "I am foo";
-
- undef bar;
- proc bar {
- say "I am bar";
- }
-}
-
-# Here bar would produce an error because
-# the proc is not yet defined!
-# bar;
-
-foo; # Here the procedure foo will define the procedure bar!
-bar; # Now the procedure bar is defined!
-foo; # Here the procedure foo will redefine bar again!
-</pre>
-<h3>Functions</h3>
-<p>A function can be defined with the "func" keyword and deleted with the "undef" keyword. Functions do not yet return values and do not yet support parameter passing. They use local (lexically scoped) variables. 
If a certain variable does not exist in the local scope, already defined variables are used (e.g. from one scope above).</p>
-<pre>
-func foo {
- say 1 + a * 3 + b;
- my c = 6;
-}
-
-my a = 2, b = 4;
-
-foo; # Run the function. Print out "11\n"
-say c; # Will produce an error, because c is out of scope!
-</pre>
-<h3>Nested functions</h3>
-<p>Nested functions work the same way the nested procedures work, with the exception that nested functions will no longer be available after the function has been left!</p>
-<pre>
-func foo {
- func bar {
- say "Hello i am nested";
- }
-
- bar; # Calling nested
-}
-
-foo;
-bar; # Will produce an error, because bar is out of scope!
-</pre>
-<h2>Arrays</h2>
-<p>Some progress on arrays has been made too. The following example creates a multi-dimensional array "foo". Its first element is the return value of the func bar. The fourth value is a string "3" converted to a double number. The last element is an anonymous array which itself contains another anonymous array as its last element:</p>
-<pre>
-func bar { say "bar" }
-my foo = [bar, 1, 4/2, double "3", ["A", ["BA", "BB"]]];
-say foo;
-</pre>
-<p>It produces the following output:</p>
-<pre>
-% ./fype arrays.fy
-bar
-01
-2
-3.000000
-A
-BA
-BB
-</pre>
-<h2>Fancy stuff</h2>
-<p>Fancy stuff like OOP or Unicode or threading is not planned. But fancy stuff like function pointers and closures may be considered. :)</p>
-<h2>May the source be with you</h2>
-<p>You can find all of this on the GitHub page. 
There is also an "examples" folder containing some Fype scripts!</p>
-<a class="textlink" href="https://github.com/snonux/fype">https://github.com/snonux/fype</a><br />
-<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
- </div>
- </content>
- </entry>
- <entry>
- <title>Standard ML and Haskell</title>
- <link href="gemini://buetow.org/gemfeed/2010-04-09-standard-ml-and-haskell.gmi" />
- <id>gemini://buetow.org/gemfeed/2010-04-09-standard-ml-and-haskell.gmi</id>
- <updated>2010-04-09T22:57:36+01:00</updated>
- <author>
- <name>Paul Buetow</name>
- <email>comments@mx.buetow.org</email>
- </author>
- <summary>I am currently looking into the functional programming language Standard ML (aka SML). The purpose is to refresh my functional programming skills and to learn something new too. Since I already know a little Haskell, I could not help myself and implemented the same exercises in Haskell too. .....to read on please visit my site.</summary>
- <content type="xhtml">
- <div xmlns="http://www.w3.org/1999/xhtml">
- <h1>Standard ML and Haskell</h1>
-<p class="quote"><i>Written by Paul Buetow 2010-04-09</i></p>
-<p>I am currently looking into the functional programming language Standard ML (aka SML). The purpose is to refresh my functional programming skills and to learn something new too. Since I already know a little Haskell, I could not help myself and implemented the same exercises in Haskell too.</p>
-<p>As you will see, SML and Haskell are very similar (at least when it comes to the basics). However, the syntax of Haskell is a bit more "advanced". Haskell utilizes fewer keywords (e.g. no val, end, fun, fn ...). Haskell also allows one to explicitly write down the function types. What I have been missing in SML so far are the so-called pattern guards. Although this is a very superficial comparison, so far I like Haskell more than SML. 
Nevertheless, I thought it would be fun to demonstrate a few simple functions of both languages to show off the similarities.</p>
-<p>Haskell is also a "pure functional" programming language, whereas SML also makes explicit use of imperative concepts. I am by no means a specialist in either of these languages, but here are a few functions implemented in both SML and Haskell:</p>
-<h2>Defining a multi data type</h2>
-<p>Standard ML:</p>
-<pre>
-datatype 'a multi
- = EMPTY
- | ELEM of 'a
- | UNION of 'a multi * 'a multi
-</pre>
-<p>Haskell:</p>
-<pre>
-data (Eq a) => Multi a
- = Empty
- | Elem a
- | Union (Multi a) (Multi a)
- deriving Show
-</pre>
-<h2>Processing a multi</h2>
-<p>Standard ML:</p>
-<pre>
-fun number (EMPTY) _ = 0
- | number (ELEM x) w = if x = w then 1 else 0
- | number (UNION (x,y)) w = (number x w) + (number y w)
-fun test_number w = number (UNION (EMPTY, \
- UNION (ELEM 4, UNION (ELEM 6, \
- UNION (UNION (ELEM 4, ELEM 4), EMPTY))))) w
-</pre>
-<p>Haskell:</p>
-<pre>
-number Empty _ = 0
-number (Elem x) w = if x == w then 1 else 0
-number (Union x y) w = (number x w) + (number y w)
-test_number w = number (Union Empty \
- (Union (Elem 4) (Union (Elem 6) \
- (Union (Union (Elem 4) (Elem 4)) Empty)))) w
-</pre>
-<h2>Simplify function</h2>
-<p>Standard ML:</p>
-<pre>
-fun simplify (UNION (x,y)) =
- let fun is_empty (EMPTY) = true | is_empty _ = false
- val x' = simplify x
- val y' = simplify y
- in if (is_empty x') andalso (is_empty y')
- then EMPTY
- else if (is_empty x')
- then y'
- else if (is_empty y')
- then x'
- else UNION (x', y')
- end
- | simplify x = x
-</pre>
-<p>Haskell:</p>
-<pre>
-simplify (Union x y)
- | (isEmpty x') && (isEmpty y') = Empty
- | isEmpty x' = y'
- | isEmpty y' = x'
- | otherwise = Union x' y'
- where
- isEmpty Empty = True
- isEmpty _ = False
- x' = simplify x
- y' = simplify y
-simplify x = x
-</pre>
-<h2>Delete all</h2>
-<p>Standard ML:</p>
-<pre>
-fun delete_all m w =
- let fun delete_all' (ELEM x) = if x = w then EMPTY else ELEM x
- | delete_all' (UNION (x,y)) = UNION 
(delete_all' x, delete_all' y)
- | delete_all' x = x
- in simplify (delete_all' m)
- end
-</pre>
-<p>Haskell:</p>
-<pre>
-delete_all m w = simplify (delete_all' m)
- where
- delete_all' (Elem x) = if x == w then Empty else Elem x
- delete_all' (Union x y) = Union (delete_all' x) (delete_all' y)
- delete_all' x = x
-</pre>
-<h2>Delete one</h2>
-<p>Standard ML:</p>
-<pre>
-fun delete_one m w =
- let fun delete_one' (UNION (x,y)) =
- let val (x', deleted) = delete_one' x
- in if deleted
- then (UNION (x', y), deleted)
- else let val (y', deleted) = delete_one' y
- in (UNION (x, y'), deleted)
- end
- end
- | delete_one' (ELEM x) =
- if x = w then (EMPTY, true) else (ELEM x, false)
- | delete_one' x = (x, false)
- val (m', _) = delete_one' m
- in simplify m'
- end
-</pre>
-<p>Haskell:</p>
-<pre>
-delete_one m w = do
- let (m', _) = delete_one' m
- simplify m'
- where
- delete_one' (Union x y) =
- let (x', deleted) = delete_one' x
- in if deleted
- then (Union x' y, deleted)
- else let (y', deleted) = delete_one' y
- in (Union x y', deleted)
- delete_one' (Elem x) =
- if x == w then (Empty, True) else (Elem x, False)
- delete_one' x = (x, False)
-</pre>
-<h2>Higher order functions</h2>
-<p>The first line is always the SML code, the second line always the Haskell variant:</p>
-<pre>
-fun make_map_fn f1 = fn (x,y) => f1 x :: y
-make_map_fn f1 = \x y -> f1 x : y
-
-fun make_filter_fn f1 = fn (x,y) => if f1 x then x :: y else y
-make_filter_fn f1 = \x y -> if f1 x then x : y else y
-
-fun my_map f l = foldr (make_map_fn f) [] l
-my_map f l = foldr (make_map_fn f) [] l
-
-fun my_filter f l = foldr (make_filter_fn f) [] l
-my_filter f l = foldr (make_filter_fn f) [] l
-</pre>
-<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
- </div>
- </content>
- </entry>
- <entry>
- <title>Perl Poetry</title>
- <link href="gemini://buetow.org/gemfeed/2008-06-26-perl-poetry.gmi" />
- <id>gemini://buetow.org/gemfeed/2008-06-26-perl-poetry.gmi</id>
- 
<updated>2008-06-26T21:43:51+01:00</updated>
- <author>
- <name>Paul Buetow</name>
- <email>comments@mx.buetow.org</email>
- </author>
- <summary>Here are some Perl Poems I wrote. They don't do anything useful when you run them, but they don't produce a compiler error either. They only exist for fun and demonstrate what you can do with Perl syntax. .....to read on please visit my site.</summary>
- <content type="xhtml">
- <div xmlns="http://www.w3.org/1999/xhtml">
- <h1>Perl Poetry</h1>
-<pre>
- '\|/' * --- * -----
- /|\ ____
- ' | ' {_ o^> *
- : -_ /)
- : ( ( .-''`'.
- . \ \ / \
- . \ \ / \
- \ `-' `'.
- \ . ' / `.
- \ ( \ ) ( .')
- ,, t '. | / | (
- '|``_/^\___ '| |`'-..-'| ( ()
-_~~|~/_|_|__/|~~~~~~~ | / ~~~~~ | | ~~~~~~~~
- -_ |L[|]L|/ | |\ MJP ) )
- ( |( / /|
- ~~ ~ ~ ~~~~ | /\\ / /| |
- || \\ _/ / | |
- ~ ~ ~~~ _|| (_/ (___)_| |Nov291999
- (__) (____)
-</pre>
-<p class="quote"><i>Written by Paul Buetow 2008-06-26, last updated 2021-05-04</i></p>
-<p>Here are some Perl Poems I wrote. They don't do anything useful when you run them, but they don't produce a compiler error either. They only exist for fun and demonstrate what you can do with Perl syntax.</p>
-<p>Wikipedia: "Perl poetry is the practice of writing poems that can be compiled as legal Perl code, for example the piece known as Black Perl. Perl poetry is made possible by the large number of English words that are used in the Perl language. New poems are regularly submitted to the community at PerlMonks."</p>
-<a class="textlink" href="https://en.wikipedia.org/wiki/Perl">https://en.wikipedia.org/wiki/Perl</a><br />
-<h2>math.pl</h2>
-<pre>
-#!/usr/bin/perl
-
-# (C) 2006 by Paul C. 
Buetow (http://paul.buetow.org) - -goto library for study $math; -BEGIN { s/earching/ books/ -and read $them, $at, $the } library: - -our $topics, cos and tan, -require strict; import { of, tied $patience }; - -do { int'egrate'; sub trade; }; -do { exp'onentize' and abs'olutize' }; -study and study and study and study; - -foreach $topic ({of, math}) { -you, m/ay /go, to, limits } - -do { not qw/erk / unless $success -and m/ove /o;$n and study }; - -do { int'egrate'; sub trade; }; -do { exp'onentize' and abs'olutize' }; -study and study and study and study; - -grep /all/, exp'onents' and cos'inuses'; -/seek results/ for @all, log'4rithms'; - -'you' =~ m/ay /go, not home -unless each %book ne#ars -$completion; - -do { int'egrate'; sub trade; }; -do { exp'onentize' and abs'olutize' }; - -#at -home: //ig,'nore', time and sleep $very =~ s/tr/on/g; -__END__ - -</pre> -<h2>christmas.pl</h2> -<pre> -#!/usr/bin/perl - -# (C) 2006 by Paul C. Buetow (http://paul.buetow.org) - -Christmas:{time;#!!! - -Children: do tell $wishes; - -Santa: for $each (@children) { -BEGIN { read $each, $their, wishes and study them; use Memoize#ing - -} use constant gift, 'wrapping'; -package Gifts; pack $each, gift and bless $each and goto deliver -or do import if not local $available,!!! 
HO, HO, HO; - -redo Santa, pipe $gifts, to_childs; -redo Santa and do return if last one, is, delivered; - -deliver: gift and require diagnostics if our $gifts ,not break; -do{ use NEXT; time; tied $gifts} if broken and dump the, broken, ones; -The_children: sleep and wait for (each %gift) and try { to => untie $gifts }; - -redo Santa, pipe $gifts, to_childs; -redo Santa and do return if last one, is, delivered; - -The_christmas_tree: formline s/ /childrens/, $gifts; -alarm and warn if not exists $Christmas{ tree}, @t, $ENV{HOME}; -write <<EMail - to the parents to buy a new christmas tree!!!!111 - and send the -EMail -;wait and redo deliver until defined local $tree; - -redo Santa, pipe $gifts, to_childs; -redo Santa and do return if last one, is, delivered ;} - -END {} our $mission and do sleep until next Christmas ;} - -__END__ - -This is perl, v5.8.8 built for i386-freebsd-64int -</pre> -<h2>shopping.pl</h2> -<pre> -#!/usr/bin/perl - -# (C) 2007 by Paul C. Buetow (http://paul.buetow.org) - -BEGIN{} goto mall for $shopping; - -m/y/; mall: seek$s, cool products(), { to => $sell }; -for $their (@business) { to:; earn:; a:; lot:; of:; money: } - -do not goto home and exit mall if exists $new{product}; -foreach $of (q(uality rich products)){} package products; - -our $news; do tell cool products() and do{ sub#tract -cool{ $products and shift @the, @bad, @ones; - -do bless [q(uality)], $products -and return not undef $stuff if not (local $available) }}; - -do { study and study and study for cool products() } -and do { seek $all, cool products(), { to => $buy } }; - -do { write $them, $down } and do { order: foreach (@case) { package s } }; -goto home if not exists $more{money} or die q(uerying) ;for( @money){}; - -at:;home: do { END{} and:; rest:; a:; bit: exit $shopping } -and sleep until unpack$ing, cool products(); - -__END__ -This is perl, v5.8.8 built for i386-freebsd-64int -</pre> -<h2>More...</h2> -<p>Did you like what you saw? 
Have a look at GitHub to see my other poems too:</p> -<a class="textlink" href="https://github.com/snonux/perl-poetry">https://github.com/snonux/perl-poetry</a><br /> -<p>E-Mail me your thoughts at comments@mx.buetow.org!</p> - </div> - </content> - </entry> -</feed> diff --git a/content/gemtext/gemfeed/index.gmi b/content/gemtext/gemfeed/index.gmi deleted file mode 100644 index 8880df3e..00000000 --- a/content/gemtext/gemfeed/index.gmi +++ /dev/null @@ -1,19 +0,0 @@ -# buetow.org's Gemfeed - -## Having fun with computers! - -=> ./2021-05-16-personal-bash-coding-style-guide.gmi 2021-05-16 - Personal Bash coding style guide -=> ./2021-04-24-welcome-to-the-geminispace.gmi 2021-04-24 - Welcome to the Geminispace -=> ./2021-04-22-dtail-the-distributed-log-tail-program.gmi 2021-04-22 - DTail - The distributed log tail program -=> ./2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi 2018-06-01 - Realistic load testing with I/O Riot for Linux -=> ./2016-11-20-methods-in-c.gmi 2016-11-20 - Methods in C -=> ./2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi 2016-05-22 - Spinning up my own authoritative DNS servers -=> ./2016-04-16-offsite-backup-with-zfs-part2.gmi 2016-04-16 - Offsite backup with ZFS (Part 2) -=> ./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi 2016-04-09 - Jails and ZFS with Puppet on FreeBSD -=> ./2016-04-03-offsite-backup-with-zfs.gmi 2016-04-03 - Offsite backup with ZFS -=> ./2015-12-05-run-debian-on-your-phone-with-debroid.gmi 2015-12-05 - Run Debian on your phone with Debroid -=> ./2014-03-24-the-fibonacci.pl.c-polyglot.gmi 2014-03-24 - The fibonacci.pl.c Polyglot -=> ./2011-05-07-perl-daemon-service-framework.gmi 2011-05-07 - Perl Daemon (Service Framework) -=> ./2010-05-09-the-fype-programming-language.gmi 2010-05-09 - The Fype Programming Language -=> ./2010-04-09-standard-ml-and-haskell.gmi 2010-04-09 - Standard ML and Haskell -=> ./2008-06-26-perl-poetry.gmi 2008-06-26 - Perl Poetry diff --git a/content/gemtext/index.gmi 
b/content/gemtext/index.gmi deleted file mode 100644 index 165ebcb3..00000000 --- a/content/gemtext/index.gmi +++ /dev/null @@ -1,71 +0,0 @@ -# buetow.org - -``` - ,---------------------------, - | /---------------------\ | - | | | | - | | Paul's | | - | | personal | | - | | capsule | | - | | | | - | \_____________________/ | - |___________________________| - ,---\_____ [] _______/------, - / /______________\ /| - /___________________________________ / | ___ - | | | ) - | _ _ _ [-------] | | ( - | o o o TURBO [-------] | / _)_ - |__________________________________ |/ / / - /-------------------------------------/| ( )/ - /-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/ / -/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/ / -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -``` - -## Why does this site look so old school? - -If you reach this site via the modern web, please read this: - -=> ./gemfeed/2021-04-24-welcome-to-the-geminispace.gmi Welcome to the Geminispace - -## Introduction - -My name is Paul Buetow and this is my personal internet site. You can call me a Linux/*BSD enthusiast and hobbyist. Although I also have many other interests, you will encounter mostly (if not only) technical content on this site. - -I have published some Open-Source software; you will find references to it on this site and on my GitHub page(s). I also read a lot of tech newsletters and blogs and re-share the most interesting ones on my social media feeds. You can find links to my GitHub pages and to my social media accounts on my contact information page: - -=> ./contact-information.gmi Contact information - -I have also compiled a list of resources which made an impact on me: - -=> ./resources.gmi List of resources - -## Personal blog - -English is not my mother tongue, so please forgive any errors you might encounter. 
- -### Stay updated - -=> ./gemfeed/atom.xml Subscribe to this blog's Atom feed -=> ./gemfeed/index.gmi Subscribe to this blog's Gemfeed - -### Posts - -I have switched blog software multiple times and might be back-filling some of the older articles here, so please don't be surprised when very old posts suddenly appear. - -=> ./gemfeed/2021-05-16-personal-bash-coding-style-guide.gmi 2021-05-16 - Personal Bash coding style guide -=> ./gemfeed/2021-04-24-welcome-to-the-geminispace.gmi 2021-04-24 - Welcome to the Geminispace -=> ./gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi 2021-04-22 - DTail - The distributed log tail program -=> ./gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi 2018-06-01 - Realistic load testing with I/O Riot for Linux -=> ./gemfeed/2016-11-20-methods-in-c.gmi 2016-11-20 - Methods in C -=> ./gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi 2016-05-22 - Spinning up my own authoritative DNS servers -=> ./gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi 2016-04-16 - Offsite backup with ZFS (Part 2) -=> ./gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi 2016-04-09 - Jails and ZFS with Puppet on FreeBSD -=> ./gemfeed/2016-04-03-offsite-backup-with-zfs.gmi 2016-04-03 - Offsite backup with ZFS -=> ./gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid.gmi 2015-12-05 - Run Debian on your phone with Debroid -=> ./gemfeed/2014-03-24-the-fibonacci.pl.c-polyglot.gmi 2014-03-24 - The fibonacci.pl.c Polyglot -=> ./gemfeed/2011-05-07-perl-daemon-service-framework.gmi 2011-05-07 - Perl Daemon (Service Framework) -=> ./gemfeed/2010-05-09-the-fype-programming-language.gmi 2010-05-09 - The Fype Programming Language -=> ./gemfeed/2010-04-09-standard-ml-and-haskell.gmi 2010-04-09 - Standard ML and Haskell -=> ./gemfeed/2008-06-26-perl-poetry.gmi 2008-06-26 - Perl Poetry diff --git a/content/gemtext/resources.gmi b/content/gemtext/resources.gmi deleted file mode 100644 index 77165b64..00000000 
--- a/content/gemtext/resources.gmi +++ /dev/null @@ -1,118 +0,0 @@ -# Resources - -This is a list of resources I found useful. I am not an expert in all of these topics, but all the resources listed here made an impact on me. I read some of the books quite a long time ago, so there might be newer editions out there already and I might need to refresh some of the knowledge. - -The list may not be exhaustive, but I will be adding more in the future. I strongly believe that educating yourself further is one of the most important things you should do in order to advance. The lists are in random order and are reshuffled (via *sort -R*) every time updates are made. - -You won't find any links on this site because over time the links will break. Please use your favorite search engine when you are interested in one of the resources... - -``` - .--. .---. .-. - .---|--| .-. | A | .---. |~| .--. -.--|===|Ch|---|_|--.__| S |--|:::| |~|-==-|==|---. -|%%|NT2|oc|===| |~~|%%| C |--| |_|~|CATS| |___|-. -| | |ah|===| |==| | I | |:::|=| | |GB|---|=| -| | |ol| |_|__| | I |__| | | | | |___| | -|~~|===|--|===|~|~~|%%|~~~|--|:::|=|~|----|==|---|=| -^--^---'--^---^-^--^--^---'--^---^-^-^-==-^--^---^-'hjw -``` - -## Technical books - -* The C++ Programming Language; Bjarne Stroustrup; (I have to admit it has been a long time since I read this book) -* Learn You Some Erlang for Great Good!; Fred Hébert; No Starch Press -* Pro Git; Scott Chacon, Ben Straub; Apress -* Systemprogrammierung in Go; Frank Müller; dpunkt -* DNS and BIND; Cricket Liu; O'Reilly -* Concurrency in Go; Katherine Cox-Buday; O'Reilly -* Modern Perl; chromatic; Onyx Neon Press -* Java ist auch eine Insel; Christian Ullenboom; -* Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O'Reilly -* Advanced Bash-Scripting Guide; Not a book per se, but it could be one -* Site Reliability Engineering: How Google Runs Production Systems; O'Reilly -* Systems Performance Tuning; Gian-Paolo D. 
Musumeci and others...; O'Reilly -* The Practice of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. Chalup; Addison-Wesley Professional -* Clusterbau mit Linux-HA; Michael Schwartzkopff; O'Reilly -* Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner -* Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall & Jon Orwant; O'Reilly -* Higher Order Perl; Mark Dominus; Morgan Kaufmann -* The Docker Book; James Turnbull; Kindle -* Developing Games in Java; David Brackeen and others...; New Riders -* Effective awk Programming; Arnold Robbins; O'Reilly -* Learn You a Haskell for Great Good!; Miran Lipovača; No Starch Press -* Funktionale Programmierung; Peter Pepper; Springer -* Pro Puppet; James Turnbull, Jeffrey McCune; Apress -* The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional -* 21st Century C: C Tips from the New School; Ben Klemens; O'Reilly -* Distributed Systems: Principles and Paradigms; Andrew S. Tanenbaum; Pearson - -## Technical bibles - -I didn't read them from beginning to end, but I use them to look things up. - -* The Linux Programming Interface; Michael Kerrisk; No Starch Press -* Understanding the Linux Kernel; Daniel P. Bovet, Marco Cesati; O'Reilly -* Algorithms; Robert Sedgewick, Kevin Wayne; Addison-Wesley - -## Self-development and soft-skills books - -* Atomic Habits; James Clear; Random House Business -* The Complete Software Developer's Career Guide; John Sonmez; Unabridged Audiobook -* Eat That Frog!; Brian Tracy; Hodder Paperbacks -* Time Management for System Administrators; Thomas A. 
Limoncelli; O'Reilly -* Digital Minimalism; Cal Newport; Portfolio Penguin -* Stop starting, start finishing; Arne Roock; Lean-Kanban University -* Ultralearning; Scott Young; Thorsons -* The Joy of Missing Out; Christina Crook; New Society Publishers -* Soft Skills; John Sonmez; Manning Publications -* So Good They Can't Ignore You; Cal Newport; Business Plus -* The Bullet Journal Method; Ryder Carroll; Fourth Estate -* Psycho-Cybernetics; Maxwell Maltz; Perigee Books -* Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion -* The Off Switch; Mark Cropley; Virgin Books -* The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books -* Deep Work; Cal Newport; Piatkus -* The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK -* The Power of Now; Eckhart Tolle; Yellow Kite - -## Technical video lectures and courses - -Some of these were in-person trainings with exams; others were online lectures only. - -* Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training -* MySQL Deep Dive Workshop; 2-day on-site training -* Protocol buffers; O'Reilly Online -* Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online -* Red Hat Certified System Administrator; Course + certification (Although I had the option, I decided not to take the next course, as it is more effective to self-learn what I need) -* Scripting Vim; Damian Conway; O'Reilly Online -* The Ultimate Kubernetes Bootcamp; School of Devops; O'Reilly Online -* Ultimate Go Programming; Bill Kennedy; O'Reilly Online -* Structure and Interpretation of Computer Programs; Harold Abelson and more...; -* F5 Loadbalancers Training; 2-day on-site training; F5, Inc. -* Apache Tomcat Best Practices; 3-day on-site training -* Functional programming lecture; Remote University of Hagen - -## Fiction and more books - -Many fiction and non-fiction books I read are not listed here. 
This list mostly includes resources which made an impact on the work I do, not on my personal life. Can you recommend a good science fiction novel? E-Mail me; I can also provide my own recommendations! :-) - -## Formal education - -I have met many self-taught IT professionals whom I highly respect. In my opinion, a formal degree does not automatically qualify a person for a certain job. It is more about how you educate yourself further *after* formal education. Pragmatic thinking and getting things done do not require a college or university degree. - -However, I still believe a degree in Computer Science helps you gain a good understanding of all the theory you would never have learned about otherwise. Isn't it cool to understand how compilers work under the hood (automata theory), even if your current position does not require you to hack on a compiler? You can apply the same theory to other things too. This was just *one* example. - -* One-year student exchange programme in OH, USA -* German school majors (Abitur), focus areas: German and Mathematics -* Half-year internship as a C/C++ programmer in Sofia, Bulgaria -* Graduated from university as Diplom-Inform. (FH) at the Aachen University of Applied Sciences, Germany - -My diploma thesis "Object oriented development of a GUI based tool for event based simulation of distributed systems" can be found at: - -=> https://github.com/snonux/vs-sim - -I was one of the last students to be handed an "old-fashioned" German Diploma degree before the university switched to the international Bachelor and Master programmes. To give you an idea: "Diplom-Inform. (FH)" literally translates to "Diploma in Informatics from a University of Applied Sciences (FH: Fachhochschule)". In terms of international credit points, it sits between a Bachelor of Computer Science and a Master of Computer Science degree. 
- -Colleges and universities are very expensive in many countries. Come to Germany; the first college degree is free (if you finish within a certain deadline!) - -=> ./ Go back to the main site |
