| field | value | |
|---|---|---|
| author | Paul Buetow <git@mx.buetow.org> | 2021-05-02 16:41:28 +0100 |
| committer | Paul Buetow <git@mx.buetow.org> | 2021-05-21 05:11:04 +0100 |
| commit | 7bcd33dba38209753e441217536cc9bde1929f9a (patch) | |
| tree | 195257c723124e41539d6222a11b6184b431ed0a | /content/gemtext |
| parent | 8c2fce29739692816ad67eaa315e30db9316c129 (diff) | |
Use an AI to correct some of the grammar
Diffstat (limited to 'content/gemtext')
8 files changed, 32 insertions, 32 deletions
diff --git a/content/gemtext/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi b/content/gemtext/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi
index c03b2637..314bea84 100644
--- a/content/gemtext/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi
+++ b/content/gemtext/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi
@@ -19,24 +19,24 @@
## Please don't lose all my pictures again!
-When it comes to data storage and potential data loss I am a paranoid person. It is not just due to my job but also due to a personal experience I encountered over 10 years ago: A single drive failure and loss of all my data (pictures, music, ....).
+When it comes to data storage and potential data loss I am a paranoid person. It is not just due to my job but also due to a personal experience I encountered over 10 years ago: A single drive failure and loss of all my data (pictures, music, ....).
-A little bit about my personal infrastructure: I am running my own (mostly FreeBSD based) root servers (across several countries: Two in Germany, one in Canada, one in Bulgaria) which store all my online data (E-Mail and my Git repositories). I am syncing incremental (and encrypted) ZFS snapshots between these servers forth and back so either data could be recovered from the other server.
+A little about my personal infrastructure: I am running my own (mostly FreeBSD based) root servers (across several countries: Two in Germany, one in Canada, one in Bulgaria) which store all my online data (E-Mail and my Git repositories). I am syncing incremental (and encrypted) ZFS snapshots between these servers forth and back so either data could be recovered from the other server.
## Local storage box for offline data
-Also, I am operating a local server (an HP MicroServer) at home in my apartment. Full snapshots of all ZFS volumes are pulled from the "online" servers to the local server every other week and the incremental ZFS snapshots every day. That local server has a ZFS ZMIRROR with 3 disks configured (local tripple redundancy). I keep up to half a year worth of ZFS snapshots of all volumes. That local server also contains all my offline data such as pictures, private documents, videos, books, various other backups, etc.
+Also, I am operating a local server (an HP MicroServer) at home in my apartment. Full snapshots of all ZFS volumes are pulled from the "online" servers to the local server every other week and the incremental ZFS snapshots every day. That local server has a ZFS ZMIRROR with 3 disks configured (local triple redundancy). I keep up to half a year worth of ZFS snapshots of all volumes. That local server also contains all my offline data such as pictures, private documents, videos, books, various other backups, etc.
Once weekly all the data of that local server is copied to two external USB drives as a backup (without the historic snapshots). For simplicity these USB drives are not formatted with ZFS but with good old UFS. This gives me a chance to recover from a (potential) ZFS disaster. ZFS is a complex thing. Sometimes it is good not to trust complex things!
## Storing it at my apartment is not enough
-Now I am thinking about a offsite backup of all this local data. The problem is, that all the data remains on a single physical location: My local MicroServer. What happens when the house burns or someone steals my server including the internal disks and the attached USB drives? My first thought was to backup everything to the "cloud". The major issue here is however the limited amount of available upload bandwidth (only 1MBit/s).
+Now I am thinking about an offsite backup of all this local data. The problem is, that all the data remains on a single physical location: My local MicroServer. What happens when the house burns or someone steals my server including the internal disks and the attached USB drives? My first thought was to back up everything to the "cloud". The major issue here is however the limited amount of available upload bandwidth (only 1MBit/s).
-The solution is adding another USB drive (2TB) with an encryption container (GELI) and a ZFS pool on it. The GELI encryption requires a secret key and a secret passphrase. I am updating the data to that drive once every 3 months (Google Calendar is reminding me doing it) and afterwards I am keeping that drive at a secret location outside of my apartment. All the information needed to decrypt (mounting the GELI container) is stored at another (secure) place. Key and passphrase are kept at different places though. Even if someone would know of it, he would not be able to decrypt it as some additional insider knowledge would be required also.
+The solution is adding another USB drive (2TB) with an encryption container (GELI) and a ZFS pool on it. The GELI encryption requires a secret key and a secret passphrase. I am updating the data to that drive once every 3 months (my calendar is reminding me about it) and afterwards I keep that drive at a secret location outside of my apartment. All the information needed to decrypt (mounting the GELI container) is stored at another (secure) place. Key and passphrase are kept at different places though. Even if someone would know of it, he would not be able to decrypt it as some additional insider knowledge would be required as well.
## Walking one round less
I am thinking of buying a second 2TB USB drive and to set it up the same way as the first one. So I could alternate the backups. One drive would be at the secret location, and the other drive would be at home. And these drives would swap location after each cycle. This would give some security about the failure of that drive and I would have to go to the secret location only once (swapping the drives) instead of twice (picking that drive up in order to update the data + bringing it back to the secret location).
-E-Mail me your throughts at comments@mx.buetow.org!
+E-Mail me your thoughts at comments@mx.buetow.org!
diff --git a/content/gemtext/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi b/content/gemtext/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi
index e1b0d5f5..b91e7727 100644
--- a/content/gemtext/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi
+++ b/content/gemtext/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi
@@ -23,8 +23,8 @@
I enhanced the procedure a bit. From now on I am having two external 2TB USB hard drives. Both are setup exactly the same way. To decrease the probability that they will not fail at about the same time both drives are of different brands. One drive is kept at the secret location. The other one is kept at home right next to my HP MicroServer.
-Whenever I am updating offsite backup I am doing it to the drive which is kept locally. Afterwards I bring it to the secret location and swap the drives and bring the other one back home. This ensures that I will always have an offiste backup available at a different location than my home - even while updating one copy of it.
+Whenever I am updating offsite backup, I am doing it to the drive which is kept locally. Afterwards I bring it to the secret location and swap the drives and bring the other one back home. This ensures that I will always have an offiste backup available at a different location than my home - even while updating one copy of it.
Furthermore, I added scrubbing (*zpool scrub...*) to the script. It ensures that the file system is consistent and that there are no bad blocks on the disk and the file system. To increase the reliability I also run a *zfs set copies=2 zroot*. That setting is also synchronized to the offsite ZFS pool. ZFS stores every data block to disk twice now. Yes, it consumes twice as much disk space but it makes it better fault tolerant against hardware errors (e.g. only individual disk sectors going bad).
-E-Mail me your throughts at comments@mx.buetow.org!
+E-Mail me your thoughts at comments@mx.buetow.org!
diff --git a/content/gemtext/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi b/content/gemtext/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi
index 61b2b06a..1d481fb3 100644
--- a/content/gemtext/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi
+++ b/content/gemtext/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi
@@ -6,13 +6,13 @@
## Background
-Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains "buetow.org" and "buetow.zone". My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files). And they also give you the opportunity to set your own authoritative DNS servers for your domains. From now I am making use of that option.
+Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains "buetow.org" and "buetow.zone". My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files). And they also give you the opportunity to set your own authoritative DNS servers for your domains. From now, I am making use of that option.
=> http://www.schlundtech.de Schlund Technologies
## All FreeBSD Jails
-In order to setup my authoritative DNS servers I installed a FreeBSD Jail dedicated for DNS with Puppet on my root machine as follows:
+In order to set up my authoritative DNS servers I installed a FreeBSD Jail dedicated for DNS with Puppet on my root machine as follows:
```
include freebsd
@@ -49,7 +49,7 @@ class { 'jail':
## PF firewall
-Please note that "dns.ian.buetow.org" is just the Jail name of the master DNS server (and "caprica.ian.buetow.org" the name of the Jail for the slave DNS server) and that I am using the DNS names "dns1.buetow.org" (master) and "dns2.buetow.org" (slave) for the actual service names (these are the the DNS servers visible to the public). Please also note that the IPv4 address is an internal one. I have a PF to use NAT and PAT. The DNS ports are being forwarded (TCP and UDP) to that Jail. By default all ports are blocked, so I am adding an exception rule for the IPv6 address as well. These are the PF rules in use:
+Please note that "dns.ian.buetow.org" is just the Jail name of the master DNS server (and "caprica.ian.buetow.org" the name of the Jail for the slave DNS server) and that I am using the DNS names "dns1.buetow.org" (master) and "dns2.buetow.org" (slave) for the actual service names (these are the DNS servers visible to the public). Please also note that the IPv4 address is an internal one. I have a PF to use NAT and PAT. The DNS ports are being forwarded (TCP and UDP) to that Jail. By default, all ports are blocked, so I am adding an exception rule for the IPv6 address as well. These are the PF rules in use:
```
% cat /etc/pf.conf
@@ -121,7 +121,7 @@ dns2 86400 IN AAAA 2a03:2500:1:6:20::
.
```
-That is my master DNS server. My slave DNS server runs in another Jail on another bare metal machine. Everything is setup similar to the master DNS server. However that server is located in a different DC and in different IP subnets. The only difference is the "named.conf". Its configured to be a slave and that means that the "dynamicdb" gets populated by BIND itself while doing zone transfers from the master.
+That is my master DNS server. My slave DNS server runs in another Jail on another bare metal machine. Everything is set up similar to the master DNS server. However, that server is located in a different DC and in different IP subnets. The only difference is the "named.conf". It's configured to be a slave and that means that the "dynamicdb" gets populated by BIND itself while doing zone transfers from the master.
```
paul uranus:~/git/blog/source [4279]% ssh admin@dns2.buetow.org tail -n 11 /usr/local/etc/namedb/named.conf
@@ -234,6 +234,6 @@ Whenever I have to change a DNS entry all have to do is:
* Wait for Puppet. Puppet will deploy that updated zone file. And it will reload the BIND server.
* The BIND server will notify all slave DNS servers (at the moment only one). And it will transfer the new version of the zone.
-Thats much more comfortable now than manually clicking at some web UIs at Schlund Technologies.
+That's much more comfortable now than manually clicking at some web UIs at Schlund Technologies.
-E-Mail me your throughts at comments@mx.buetow.org!
+E-Mail me your thoughts at comments@mx.buetow.org!
diff --git a/content/gemtext/gemfeed/2016-11-20-methods-in-c.gmi b/content/gemtext/gemfeed/2016-11-20-methods-in-c.gmi
index 1f9f2263..45dd3afe 100644
--- a/content/gemtext/gemfeed/2016-11-20-methods-in-c.gmi
+++ b/content/gemtext/gemfeed/2016-11-20-methods-in-c.gmi
@@ -81,6 +81,6 @@ How to overcome this? You need to take it further...
## Taking it further
-If you want to take it further type "Object-Oriented Programming with ANSI-C" into your favourite internet search engine, you will find some crazy stuff. Some go as far as writing a C preprocessor in AWK, which takes some object oriented pseudo-C and transforms it to plain C so that the C compiler can compile it to machine code. This is actually similar to how the C++ language had its origins.
+If you want to take it further type "Object-Oriented Programming with ANSI-C" into your favorite internet search engine, you will find some crazy stuff. Some go as far as writing a C preprocessor in AWK, which takes some object oriented pseudo-C and transforms it to plain C so that the C compiler can compile it to machine code. This is actually similar to how the C++ language had its origins.
-E-Mail me your throughts at comments@mx.buetow.org!
+E-Mail me your thoughts at comments@mx.buetow.org!
diff --git a/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi b/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi
index d6f0f7ca..15fcd899 100644
--- a/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi
+++ b/content/gemtext/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi
@@ -14,7 +14,7 @@ Running a large cloud-based service requires monitoring the state of huge number
At Mimecast, we run over 10 thousand server boxes. Most of them host multiple microservices and each of them produces log files. Even with the use of time series databases and monitoring systems, raw application logs are still an important source of information when it comes to analysing, debugging, and troubleshooting services.
-Every engineer familiar with UNIX or a UNIX-like platform (e.g., Linux) is well aware of tail , a command-line program for displaying a text file content on the terminal which is also especially useful for following application or system log files with tail -f logfile.
+Every engineer familiar with UNIX or a UNIX-like platform (e.g., Linux) is well aware of tail, a command-line program for displaying a text file content on the terminal which is also especially useful for following application or system log files with tail -f logfile.
Think of DTail as a distributed version of the tail program which is very useful when you have a distributed application running on many servers. DTail is an Open-Source, cross-platform, fairly easy to use, support and maintain log file analysis & statistics gathering tool designed for Engineers and Systems Administrators. It is programmed in Google Go.
@@ -28,9 +28,9 @@ Try it out — We would love any feedback. But first, read on…
## Differentiating from log management systems
-Why not just use a full-blown log management system? There are various Open-Source and commercial log management solutions available on the market you could choose from (e.g. the ELK stack). Most of them store the logs in a centralised location and are fairly complex to set up and operate. Possibly they are also pretty expensive to operate if you have to buy dedicated hardware (or pay fees to your cloud provider) and have to hire support staff for it.
+Why not just use a full-blown log management system? There are various Open-Source and commercial log management solutions available on the market you could choose from (e.g. the ELK stack). Most of them store the logs in a centralized location and are fairly complex to set up and operate. Possibly they are also pretty expensive to operate if you have to buy dedicated hardware (or pay fees to your cloud provider) and have to hire support staff for it.
-DTail does not aim to replace any of the log management tools already available but is rather an additional tool crafted especially for ad-hoc debugging and troubleshooting purposes. DTail is cheap to operate as it does not require any dedicated hardware for log storage as it operates directly on the source of the logs. It means that there is a DTail server installed on all server boxes producing logs. This decentralised approach comes with the direct advantages that there is no introduced delay because the logs are not shipped to a central log storage device. The reduced complexity also makes it more robust against outages. You won’t be able to troubleshoot your distributed application very well if the log management infrastructure isn’t working either.
+DTail does not aim to replace any of the log management tools already available but is rather an additional tool crafted especially for ad-hoc debugging and troubleshooting purposes. DTail is cheap to operate as it does not require any dedicated hardware for log storage as it operates directly on the source of the logs. It means that there is a DTail server installed on all server boxes producing logs. This decentralized approach comes with the direct advantages that there is no introduced delay because the logs are not shipped to a central log storage device. The reduced complexity also makes it more robust against outages. You won’t be able to troubleshoot your distributed application very well if the log management infrastructure isn’t working either.
=> ./2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif DTail sample session animated gif
@@ -40,7 +40,7 @@ As a downside, you won’t be able to access any logs with DTail when the server
DTail also has a client component that connects to multiple servers concurrently for log files (or any other text files).
-The DTail client interacts with a DTail server on port TCP/2222 via SSH protocol and does not interact in any way with the system’s SSH server (e.g., OpenSSH Server) which might be running at port TCP/22 already. As a matter of fact, you don’t need a regular SSH server running for DTail at all. There is no support for interactive login shells at TCP/2222 either, as by design that port can only be used for text data streaming. The SSH protocol is used for the public/private key infrastructure and transport encryption only and DTail implements its own protocol on top of SSH for the features provided. There is no need to setup or buy any additional TLS certificates. The port 2222 can be easily reconfigured if you preferred to use a different one.
+The DTail client interacts with a DTail server on port TCP/2222 via SSH protocol and does not interact in any way with the system’s SSH server (e.g., OpenSSH Server) which might be running at port TCP/22 already. As a matter of fact, you don’t need a regular SSH server running for DTail at all. There is no support for interactive login shells at TCP/2222 either, as by design that port can only be used for text data streaming. The SSH protocol is used for the public/private key infrastructure and transport encryption only and DTail implements its own protocol on top of SSH for the features provided. There is no need to set up or buy any additional TLS certificates. The port 2222 can be easily reconfigured if you preferred to use a different one.
The DTail server, which is a single static binary, will not fork an external process. This means that all features are implemented in native Go code (exception: Linux ACL support is implemented in C, but it must be enabled explicitly on compile time) and therefore helping to make it robust, secure, efficient, and easy to deploy. A single client, running on a standard Laptop, can connect to thousands of servers concurrently while still maintaining a small resource footprint.
@@ -72,7 +72,7 @@ dtail --servers serverlist.txt --files '/var/log/*.log' --regex '(?i:er
You usually want to specify a regular expression as a client argument. This will mean that responses are pre-filtered for all matching lines on the server-side and thus sending back only the relevant lines to the client. If your logs are growing very rapidly and the regex is not specific enough there might be the chance that your client is not fast enough to keep up processing all of the responses. This could be due to a network bottleneck or just as simple as a slow terminal emulator displaying the log lines on the client-side.
-A green 100 in the client output before each log line received from the server always indicates that there were no such problems and 100% of all log lines could be displayed on your terminal (have a look at the animated Gifs in this post). If the percentage falls below 100 it means that some of the channels used by the serves to send data to the client are congested and lines were dropped. In this case, the colour will change from green to red. The user then could decide to run the same query but with a more specific regex.
+A green 100 in the client output before each log line received from the server always indicates that there were no such problems and 100% of all log lines could be displayed on your terminal (have a look at the animated Gifs in this post). If the percentage falls below 100 it means that some of the channels used by the servers to send data to the client are congested and lines were dropped. In this case, the color will change from green to red. The user then could decide to run the same query but with a more specific regex.
You could also provide a comma-separated list of servers as opposed to a text file. There are many more options you could use. The ones listed here are just the very basic ones. There are more instructions and usage examples on the GitHub page. Also, you can study even more of the available options via the --help switch (some real treasures might be hidden there).
@@ -94,7 +94,7 @@ The features listed here are out of the scope of this blog post but are worthwhi
There are various features we want to see in the future.
-* A spartan mode, not printing out any extra information but the raw remote log files would be a nice feature to have. This will make it easier to post-process the data produced by the DTail client with common UNIX tools. (To some degree this is possible already, just disable the ANSI terminal colour output of the client with -noColors and pipe the output to another program).
+* A spartan mode, not printing out any extra information but the raw remote log files would be a nice feature to have. This will make it easier to post-process the data produced by the DTail client with common UNIX tools. (To some degree this is possible already, just disable the ANSI terminal color output of the client with -noColors and pipe the output to another program).
* Tempting would be implementing the dgoawk command, a distributed version of the AWK programming language purely implemented in Go, for advanced text data stream processing capabilities. There are 3rd party libraries available implementing AWK in pure Go which could be used.
* A more complex change would be the support of federated queries. You can connect to thousands of servers from a single client running on a laptop. But does it scale to 100k of servers? Some of the servers could be used as middleware for connecting to even more servers.
* Another aspect is to extend the documentation. Especially the advanced features such as map-reduce query language and how to configure the server-side queries currently do require more documentation. For now, you can read the code, sample config files or just ask the author for that! But this will be certainly addressed in the future.
@@ -105,4 +105,4 @@ Mimecast highly encourages you to have a look at DTail and submit an issue for a
=> https://dtail.dev
-E-Mail me your throughts at comments@mx.buetow.org!
+E-Mail me your thoughts at comments@mx.buetow.org!
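The DTail hunks above mention the client invocation and the -noColors workaround for the missing spartan mode. A minimal sketch of how those pieces fit together on a shell, assuming dtail is on the PATH; the server list file name and host names are made up for illustration:

```shell
# Hypothetical server list: one host per line (names invented for this sketch).
cat > serverlist.txt <<'EOF'
serv-001.lab.example.org
serv-002.lab.example.org
EOF

# Follow matching log lines on all listed servers at once. The regex is
# evaluated server-side, so only matching lines travel back to the client.
dtail --servers serverlist.txt --files '/var/log/*.log' --regex '(?i:error)'

# Approximation of a "spartan mode" as described in the wish list: disable
# ANSI color output and post-process the stream with common UNIX tools.
dtail --servers serverlist.txt --files '/var/log/*.log' \
    --regex '(?i:error)' -noColors | awk '{ print $NF }'
```

The second invocation is the workaround the post itself suggests; the awk pipeline is only one example of downstream processing.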
diff --git a/content/gemtext/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi b/content/gemtext/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi
index 3b67913c..8d4fc0c2 100644
--- a/content/gemtext/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi
+++ b/content/gemtext/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi
@@ -9,7 +9,7 @@ Have you reached this article already via Gemini? You need a special client for
=> gemini://buetow.org
-If you however still use HTTP then you are just surfing the fallback HTML version of this capsule. In that case I suggest to read on what this is all about :-).
+If you however still use HTTP then you are just surfing the fallback HTML version of this capsule. In that case I suggest reading on what this is all about :-).
```
@@ -47,7 +47,7 @@ All what I wanted was to read an interesting article but after a big advertising
Around the same time I discovered a relatively new more lightweight protocol named Gemini which does not support all these CPU intensive features like HTML, JavaScript and CSS do. Also, tracking and ads is not supported by the Gemini protocol.
-The "downside" is that due to the limited capabilities of the Gemini protocol all sites look very old and spartanic. But that is not really a downside, that is in fact a design choice people made. It is up to the client software how your capsule looks. For example you could use a graphical client with nice font renderings and colours to improve the appearance. Or you could just use a very minimalistic command line black-and-white Gemini client. It's your (the user's) choice.
+The "downside" is that due to the limited capabilities of the Gemini protocol all sites look very old and spartan. But that is not really a downside, that is in fact a design choice people made. It is up to the client software how your capsule looks. For example, you could use a graphical client with nice font renderings and colors to improve the appearance. Or you could just use a very minimalistic command line black-and-white Gemini client. It's your (the user's) choice.
=> ./2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png Screenshot Amfora Gemini terminal client surfing this site
@@ -55,7 +55,7 @@ Why is there a need for a new protocol? As the modern web is a superset of Gemin
## My own Gemini capsule
-As it is very easy to setup and maintain your own Gemini capsule (Gemini server + content composed via the Gemtext markup language) I decided to create my own. What I really like about Gemini is that I can just use my favourite text editor and get typing. I don't need to worry about the style and design of the presence and I also don't have to test anything in ten different web browsers. I can only focus on the content! As a matter of fact I am using the Vim editor + it's spellchecker + auto word completion functionality to write this.
+As it is very easy to set up and maintain your own Gemini capsule (Gemini server + content composed via the Gemtext markup language) I decided to create my own. What I really like about Gemini is that I can just use my favorite text editor and get typing. I don't need to worry about the style and design of the presence and I also don't have to test anything in ten different web browsers. I can only focus on the content! As a matter of fact, I am using the Vim editor + it's spellchecker + auto word completion functionality to write this.
## Advantages summarised
@@ -69,10 +69,10 @@ As it is very easy to setup and maintain your own Gemini capsule (Gemini server
## Dive into deep Gemini space
-Check out one of the following links for more information about Gemini. For example you will find a FAQ which explains why the protocol is named "Gemini". Many Gemini capsules are dual hosted via Gemini and HTTP(S), so that people new to Gemini can sneak peak the content with a normal web browser. As a matter of fact, some people go as far as tri-hosting all their content via HTTP(S), Gemini and Gopher.
+Check out one of the following links for more information about Gemini. For example, you will find a FAQ which explains why the protocol is named "Gemini". Many Gemini capsules are dual hosted via Gemini and HTTP(S), so that people new to Gemini can sneak peek the content with a normal web browser. As a matter of fact, some people go as far as tri-hosting all their content via HTTP(S), Gemini and Gopher.
=> gemini://gemini.circumlunar.space
=> https://gemini.circumlunar.space
-E-Mail me your throughts at comments@mx.buetow.org!
+E-Mail me your thoughts at comments@mx.buetow.org!
diff --git a/content/gemtext/index.gmi b/content/gemtext/index.gmi
index e88964b9..e181f7f9 100644
--- a/content/gemtext/index.gmi
+++ b/content/gemtext/index.gmi
@@ -49,7 +49,7 @@ English is not my mother tongue. So please ignore any errors you might encounter
### Posts
-I have switched blog software multiple times. I might be backfilling some of the older articles here. So please don't wonder when suddenly very old posts appear here.
+I have switched blog software multiple times. I might be back filling some of the older articles here. So please don't wonder when suddenly very old posts appear here.
=> ./gemfeed/2021-04-24-welcome-to-the-geminispace.gmi 2021-04-24 Welcome to the Geminispace
=> ./gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi 2021-04-22 DTail - The distributed log tail program
diff --git a/content/gemtext/resources.gmi b/content/gemtext/resources.gmi
index 55ee89da..7902ca90 100644
--- a/content/gemtext/resources.gmi
+++ b/content/gemtext/resources.gmi
@@ -8,7 +8,7 @@ This is a list of resources I found useful. I am not an expert in all (but some)
The list may not be exhaustive but I will be adding more in the future. I strongly believe that educating yourself further is one of the most important things you should do in order to advance. The lists are in random order and reshuffled every time (via *sort -R*) when updates are made.
-You won't find any links on this site because over time the links will break. Please use your favourite search engine when you are interested in one of the resources...
+You won't find any links on this site because over time the links will break. Please use your favorite search engine when you are interested in one of the resources...
```
.--. .---. .-.
@@ -101,7 +101,7 @@ Many fiction and non-fiction books I read are not listed here. This site mostly
I have met many self-taught IT professionals I highly respect. In my own opinion a formal degree does not automatically qualify a person for a certain job. It is more about how you educate yourself further *after* formal education. The pragmatic way of thinking and getting things done do not require a college or university degree.
-However, I still believe a degree in Computer Science helps to achieve a good understanding of all the theory involved which you would have never learned about otherwise. Isn't it cool to understand how compiler work under the hood (automata theory) even if in your current position you are not required to hack the compiler? You could apply the same theory for other things also. This was just *one* example.
+However, I still believe a degree in Computer Science helps to achieve a good understanding of all the theory involved which you would have never learned about otherwise. Isn't it cool to understand how compiler work under the hood (automata theory) even if in your current position you are not required to hack the compiler? You could apply the same theory for other things too. This was just *one* example.
* Student Exchange; I lived 1 year abroad and went to a US high school.
* German School Majors (Abitur), focus areas: German and Mathematics
@@ -111,6 +111,6 @@ However, I still believe a degree in Computer Science helps to achieve a good un
=> https://github.com/snonux/vs-sim VS-Sim - The Distributed Systems Simulator
-I was one of the last students to whom was handed out an old fashioned German Diploma degree before the University switched to the international Bachelor and Master versions. To give you an idea: The "Diplom-Inform. (FH)" means literally translated "Diploma in Informatics from a University of Applied Sciences (FH: Fachhochschule)". Going after the international student credit score it is settled between a Bachelor of Computer Science and a Master of Computer Science degree.
+I was one of the last students to whom was handed out an "old fashioned" German Diploma degree before the University switched to the international Bachelor and Master versions. To give you an idea: The "Diplom-Inform. (FH)" means literally translated "Diploma in Informatics from a University of Applied Sciences (FH: Fachhochschule)". Going after the international student credit score it is settled between a Bachelor of Computer Science and a Master of Computer Science degree.
Colleges and Universities are very expensive in many countries. Come to Germany, the first college degree is for free (if you finish within a certain deadline!)
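The offsite-backup posts touched by this commit describe a GELI-encrypted USB drive carrying a ZFS pool, refreshed every few months and stored away from home. A minimal FreeBSD sketch of that setup, assuming the drive appears as /dev/da0 and inventing the key file path and pool name (the posts give neither):

```shell
# Assumption: the USB drive is /dev/da0 and the key file is kept on separate
# media; both names are illustrative, not taken from the posts.
dd if=/dev/random of=/secure/offsite.key bs=64 count=1

# Initialise the GELI container with a key file AND a passphrase (geli
# prompts for the passphrase), then attach it; this creates /dev/da0.eli.
geli init -s 4096 -K /secure/offsite.key /dev/da0
geli attach -k /secure/offsite.key /dev/da0

# Create a ZFS pool on the encrypted provider and raise the block-level
# redundancy, as part 2 of the posts does with "zfs set copies=2".
zpool create offsite /dev/da0.eli
zfs set copies=2 offsite

# After refreshing the backup data: verify, then cleanly release the drive
# before carrying it to the offsite location.
zpool scrub offsite
zpool export offsite
geli detach da0.eli
```

This is a sketch of the procedure under the stated assumptions, not the author's exact script; how the data lands on the pool (rsync, zfs send, plain cp) is not specified in the posts.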
