La Vita è Bella

Sunday, July 26, 2020

Setting up a Linux laptop on 2020

I'm approaching my 3-year anniversary at my current employer, which means my employer-issued laptop is up for a refresh. I asked for a Linux laptop to replace my Macbook Pro, as I have been disappointed with Apple's neglect of macOS for a long time. Actually, if someone had told me 3 years ago, when I first joined the company, that I could choose a Linux laptop, I likely would have asked for one back then.

The new laptop issued to me is a Dell Precision 5540. It arrived yesterday and I spent most of yesterday and today setting it up. I asked for a Chromebook but was denied. I also asked for a lighter one (like a Dell XPS 13) but was told that this is currently the only Linux laptop choice for the fleet.

The iMac at home that I bought in 2013 also had its hard drive die recently. So after I ship my old Macbook Pro back to my employer, I will officially be without any Mac/macOS devices for the first time after using them for 14+ years. (Although in recent years I had already shifted my personal use to ChromeOS as much as possible.)

Although for the past 14+ years I have almost always had a Linux machine in the household, I mostly used it as a home server, not attached to any keyboard, mouse, or display most of the time, and only ssh-ed into it. So the whole setting-up-a-Linux-laptop thing is still kinda new to me.

The new laptop came with Ubuntu pre-installed, but I used that to create a bootable USB stick with the Debian testing (bullseye) installer and wiped it with Debian. The setup process was much smoother than my prior experience (from the early 2000s). There were only a few issues I needed to correct manually.

Installer with firmware

One thing I always forget is that for "modern" hardware I need the Debian installer with non-free firmware. At first I used the netinst image for standard Debian testing, and during the installation process it warned me about 10+ missing firmware files, mostly for the WiFi chip, so I went back to download and burn the correct image. Even with that one, the installation process still warned about 2 missing firmware files, and from the look of them they are both for the WiFi chip. But with the prior experience in mind I went ahead, and the WiFi worked during the installation process, so I guess the non-free firmware in the installer image is good enough to make WiFi workable, but probably not at its max capacity?

Bluetooth headsets

I installed Debian testing with KDE. After installation most things just work right out of the box, which is really nice. One thing I noticed not working was bluetooth headsets.

I used KDE's builtin bluetooth widget to pair my bluetooth headsets (yes, I have multiple sets). First I tried to pair the Google Pixel Buds 2. The pairing process failed multiple times, and after it finally paired, it could never connect. Then I tried the Bowers & Wilkins PX. The pairing went smoothly, and the laptop could connect to it, but once connected I couldn't get any audio to play on the headset instead of the built-in speaker.

After some googling, I did a few things.

With those steps, I have no problem using my bluetooth headsets as A2DP devices for output, but I still cannot use them for input (microphone). Guess I'll just stick with the laptop's builtin microphone for meetings for a while until I figure that out.
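For reference, a commonly suggested recipe for this class of problem looks roughly like the following. This is only a sketch and may not match the exact steps I took; the device address in the last command is a placeholder:

```shell
# install the PulseAudio bluetooth module (Debian package name)
sudo apt install pulseaudio-module-bluetooth

# restart PulseAudio so the module gets loaded
pulseaudio -k && pulseaudio --start

# force the headset's card to the A2DP sink profile;
# find the real card name via `pactl list cards short`
pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX a2dp_sink
```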

Setting Chrome as the default browser from Konsole

Although after installing Chrome and starting it for the first time it offered to set itself as the default browser across the system, apparently KDE has a separate setting for the default browser across KDE applications (the most important one being Konsole), and Konqueror was still the default browser when I clicked a URL inside Konsole. I needed to change an additional setting inside KDE's System Settings to set Chrome as the default browser there.
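The same change can presumably also be made from the command line with xdg-utils (untested on my setup, and the .desktop file name may differ per installation):

```shell
# set the desktop-wide default browser
xdg-settings set default-web-browser google-chrome.desktop
# verify the current setting
xdg-settings get default-web-browser
```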

Google Drive client

I wrote a toy/experimental FUSE implementation of Google Drive in Go, godrive-fuse, but that's too experimental and immature for real daily use, so I installed google-drive-ocamlfuse for daily use instead. (EDIT: switched to rclone for daily Google Drive use.)

U2F device

I also purchased a Yubikey 5 nano to leave permanently in one of the USB-A ports. It works great for U2F purposes out of the box, without any additional work. But one problem is that when I touch it without a U2F challenge in progress, it starts sending random keys to my active window.

The reason is that when there's no U2F challenge in progress, touching it activates OTP mode. I don't use it for any OTP purpose, so I disabled OTP mode via the ykman config usb --disable OTP command (ykman comes from the yubikey-manager package).

13:16:03 by fishy - linux - Permanent Link

1 comment - no trackbacks yet - karma: 45 [+/-]

Friday, December 11, 2015

Let's Encrypt!

Thanks to Let's Encrypt, this blog is now served via https (and https only):

Screenshot of https certificate

In the process of enabling https, I also switched my host from Dreamhost to Google Cloud, and switched to nginx as the httpd. (And Dreamhost announced Let's Encrypt support after I made the switch.)

The only problem with Let's Encrypt is that the certificate is only valid for 90 days (OK, no support for wildcard domains might also be a problem, but it doesn't affect me), which means I need to renew my certificates often. Luckily that can be done via a monthly (or bi-monthly) cron job.

This is the code snippet of my nginx configuration to make both https only and Let's Encrypt ACME verification work:

server {
        listen 80;
        listen [::]:80;

        location /.well-known/acme-challenge/ {
                alias /var/www/challenges/.well-known/acme-challenge/;
                try_files $uri =404;
        }

        location / {
                return 301 https://$host$request_uri;
        }
}
And this is the command to put into the crontab (I use the official client from Debian experimental):

/usr/bin/letsencrypt certonly --renew-by-default --webroot -w /var/www/challenges -d -d -d
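For example, a monthly crontab entry along these lines would renew the certificates and reload nginx afterwards (a sketch only; the schedule and the domain are illustrative placeholders):

```
# m h dom mon dow: run at 03:00 on the 1st of every month
0 3 1 * * /usr/bin/letsencrypt certonly --renew-by-default --webroot -w /var/www/challenges -d example.com && /usr/sbin/service nginx reload
```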

That's it! Please consider donating to Let's Encrypt!


23:26:36 by fishy - linux - Permanent Link

no comments yet - no trackbacks yet - karma: 50 [+/-]

Wednesday, February 25, 2009

Note: dhcpd configuration

Although I've got 802.11n working on my Asus Eee Box, copying big files between the Eee Box and my laptop over WiFi is slow (as my Time Capsule is far away). So I use a cable for file copying: that's gigabit!

But I have to set my laptop's ethernet to use DHCP in the office. To avoid maintaining 2 network configurations on my laptop, I need my Eee Box to act as a dhcpd that can automatically assign an IP to my laptop, but without harming the router/nameserver configurations on my laptop, or the existing DHCP in the WiFi network.

I installed the dhcp3-server package via aptitude, looked into the default configuration file, and ended up with this configuration:

subnet netmask {

And it works!
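The subnet values were elided above; for reference, a minimal configuration with the same intent looks something like this (the addresses are hypothetical placeholders):

```
# /etc/dhcp3/dhcpd.conf (illustrative addresses)
subnet 192.168.10.0 netmask 255.255.255.0 {
        range 192.168.10.100 192.168.10.110;
        # deliberately no "option routers" / "option domain-name-servers"
        # lines, so the client keeps its existing router and DNS settings
}
```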


20:07:09 by fishy - linux - Permanent Link

1 comment - no trackbacks yet - karma: 53 [+/-]

Monday, January 05, 2009

The high memory usage of Squid with external acl

We configured some Squid 2.6 servers that use external_acl_type with some headers sent by the client for access control. The authentication isn't the usual username/password routine, but uses some tag to calculate a hash. When running, the memory used by Squid just keeps increasing over time, as if it had a memory leak. We tried disabling the acl on some servers, and those servers ran just fine.

Since the external acl runs in a separate process, even if the acl program has a memory leak, the memory used by the squid process shouldn't be growing.

We tried many ways to figure out the problem, but all failed. Finally someone noticed that in the external_acl_type documentation, there's a parameter named "cache", with this description:

result cache size, 0 is unbounded (default)

"Unbounded"! So this is the problem. For the username/password routine, the cache is useful: the next time some user with the same username/password comes, Squid can get the result from the cache without communicating with the acl program. But for our authentication method, as the headers used to calculate the hash differ on every request, the cache is totally useless.

I really wish that "0" meant no cache and "-1" meant unbounded. But anyway, setting "cache" to 1 does the trick. Now the Squids don't have memory problems anymore, although cache replacement will slow them down a bit.
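As an illustration, the fix in squid.conf looks something like this. The helper name, header name, and helper path here are hypothetical; only the cache=1 option is the actual change:

```
# bound the result cache to one entry instead of the unbounded default
external_acl_type tag_check cache=1 %{X-Auth-Tag} /usr/local/bin/tag_check_helper
acl tagged external tag_check
http_access allow tagged
```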


18:09:03 by fishy - linux - Permanent Link

no comments yet - 1 trackback - karma: 42 [+/-]

Saturday, November 08, 2008

Got 802.11n working on Asus Eee Box!

In my last post, I used ndiswrapper for the wireless driver, and it could only do 802.11g, not 802.11n. But today I've got the solution!

According to this article on the EeeUser forum, the source code of the Linux driver for the rt2860 chipset has been released! Download it from the official website and build it. You will need the kernel headers package to build the driver.

After a successful build, use "modprobe rt2860sta" to install the module, and you may also add the line "rt2860sta" to your "/etc/modules" file to load it automatically every time (but it seems that modprobe has done this already, so you may not need this step).

Now here's a problem: it seems that wpa_supplicant doesn't support this driver, so you need to set the wireless parameters manually with iwpriv. And the biggest problem is WPAPSK: you can't just input your passphrase. Luckily there's a webpage that can calculate the WPAPSK for us. I'm using WPA2 and it works; I'm not sure about WPA. But WPA is broken! Why don't you move to WPA2? ;)
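Alternatively, if you have the wpasupplicant package installed anyway, its wpa_passphrase tool can do the same calculation locally (a sketch; the SSID and passphrase are placeholders):

```shell
# derive the 256-bit PSK from the SSID and passphrase
wpa_passphrase "MySSID" "my secret passphrase"
# the "psk=..." line of the output is the hex value
# to feed to: iwpriv ra0 set WPAPSK=<hex>
```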

Save the script below to "/etc/" and give it execute permission:

#!/bin/bash

# interface and loop parameters (defaults assumed; tune to taste)
iface=ra0
assoc_loop=30
assoc_report=5

w="iwpriv $iface"

total_start=$(date +%s)

init_start=$(date +%s)
echo -n "iwpriv config..."
$w set NetworkType=Infra
$w set AuthMode=WPA2PSK
$w set EncrypType=AES
$w set "SSID=Your SSID"
# get WPAPSK from the webpage mentioned above, then:
# $w set WPAPSK=<calculated key>
echo "done"
init_end=$(date +%s)

assoc_start=$(date +%s)
echo -n "Associating..."
assoc=0
for ((i=0; $i < $assoc_loop; i++)); do
        # a non-empty ESSID in iwconfig output means we are associated
        if [ "$(iwconfig ra0 2>/dev/null | head -1 | cut -f2 -d: | cut -f1 -d" ")" != "\"\"" ]; then
                assoc=1
                echo done
                break
        fi
        if [[ $(( ($i+1) % $assoc_report )) == 0 ]]; then
                echo -n .
        fi
        sleep 1
done

if [[ $assoc != 1 ]]; then
        echo failed
        exit 1
fi
assoc_end=$(date +%s)

total_end=$(date +%s)

init_time=$(( init_end - init_start ))
assoc_time=$(( assoc_end - assoc_start ))
total_time=$(( total_end - total_start ))
echo -e "Time spent (seconds)\n\tinit: $init_time\n\tassociation: $assoc_time\n\tTotal: $total_time"

Credit to a1l0a2k9: the above script is also from the EeeUser forum, but I removed the DHCP part and the modprobe part. If you are using DHCP, you may need those parts from the original script.

Now for the "/etc/network/interfaces" part, add the following lines for the ra0 interface:

iface ra0 inet static
up /etc/
auto ra0

(For DHCP users: change "static" to "manual" and remove the "address", "netmask" and "gateway" lines.)

And now, "ifup ra0", then you're done!


11:01:14 by fishy - linux - Permanent Link

no comments yet - no trackbacks yet - karma: 54 [+/-]

Thursday, November 06, 2008

Debian Lenny on Asus Eee Box

UPDATE: now we have 802.11n!

My old home server is dying these days, so I bought a new Asus Eee Box B202 to replace it. It has an Intel Atom N270 CPU, 1G of memory, an 80G harddisk, 10/100/1000 ethernet and 802.11n wireless.

The first thing I did on it was to install Debian Lenny, my favourite system for servers.

Preparing USB flash for net install

As it didn't come with a CD-ROM drive, I chose USB flash. I used the SD card from my camera and a USB card reader as a makeshift USB flash drive, and it boots successfully.

I prepared the USB flash drive according to the Debian Lenny documentation, but met some problems:

  1. Googling for hd-media returned the hd-media link for Sarge as the first result. I used the boot.img.gz from Sarge with the lenny-businesscard iso, but the iso couldn't be found by the installer; the installer (boot.img.gz) and the iso must match.
  2. The hd-media from Lenny and the Lenny beta2 iso didn't match either. The installer recognized the iso but complained about a mismatched kernel version and prompted that it needed a network update. But the installer from boot.img.gz didn't come with an ethernet driver, so it failed and couldn't continue.

So finally I had to use "the flexible way" and do a net install. I used an initrd.gz that has the ethernet driver, and the vmlinuz from the Lenny hd-media. None of the isos are needed (and you can't use them); all packages will be downloaded from one of the Debian mirrors.

After preparing the USB flash drive, DON'T FORGET to lock the write-protection lock before booting. It will save your life later.

Install Debian

Boot from the prepared USB flash drive. It contains only the GRUB CLI, so you need to boot the installer manually:

root (hd0,0)
kernel /vmlinuz
initrd /initrd.gz

Now you have a Debian Installer that can drive your ethernet card, so you're ready to install.

Install steps are normal, nothing more to say until the grub-install step.

grub-install failure

At the grub-install step, it will complain that grub-install (hd0) failed. Why? Because (hd0) is your USB flash drive and (hd1) is your harddisk! That's why locking the write-protection lock is important; otherwise it might succeed without writing your harddisk's MBR. That's really stupid. Manually install grub on (hd1) and it will continue.
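From a GRUB legacy shell, the manual install looks something like this (assuming your boot files live on the first partition of the harddisk):

```
grub> root (hd1,0)
grub> setup (hd1)
```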

And the installation completes.

The X problem

After installation, you may find that your X doesn't work. This is because it uses an Intel graphics chipset meant for laptops, but it's not a laptop. The Ubuntu wiki has the solution, and it works.

Wireless driver

UPDATE: follow my next blog article for official driver and 802.11n!

After installation, the first important thing to do is to get the wireless card working. It's an AzureWave card with an rt2860 chipset. And luckily, ndiswrapper can drive it.

Follow the instructions on the Debian wiki to install ndiswrapper. The Windows driver is on the bundled CD (you do have another computer to read the CD and copy the driver to the Eee Box, don't you?); I used the WIN2KXP one. After installing ndiswrapper, it works.

But there is a problem in /etc/network/interfaces if you use WPA (I didn't get this problem when using WEP):

auto wlan0
iface wlan0 inet static
wpa-conf /etc/wpa_supplicant.conf

The configuration above can be used to connect to the router (I can see the box from the router admin page), but the IP can't be reached by another computer in the subnet. But if you execute an extra:

# ifconfig wlan0

Then it will be OK. I don't know why, but if I move the "auto wlan0" line after the "wpa-conf" line, it works fine. Maybe it must wait for wpa_supplicant to do something first?

Another problem is that it can only do 802.11g. If anyone knows how to drive it at 802.11n, please tell me :)

The end and photos

Finally, I have a new home server now.

Photos of the Asus Eee Box


14:31:00 by fishy - linux - Permanent Link

5 comments - 2 trackbacks - karma: 62 [+/-]

Monday, November 19, 2007

AFP versus SMB

I have a Linux file server at home running Debian Lenny, and I have always used SMB for file sharing; it has very, very bad performance. Today I suddenly remembered that Apple has an AFP protocol, so I gave it a try.

I used "apt-cache search afp" to find out that there's a package named "netatalk" that can provide AFP file sharing, so I installed it. But from Leopard I could only log in with the guest account, not my system user.

I googled it and found the problem: on the Debian side, due to a license issue, the Debian package doesn't come with SSL support; on the Leopard side, it doesn't allow you to exchange your password with an AFP server without SSL. So the solution is to build netatalk yourself, with SSL.

The build steps are described on this blog post, and I also disabled atalkd as the author suggested, which makes netatalk start up much faster than before.
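If I remember the recipe correctly, the rebuild boils down to something like this (a sketch only; the exact build option may vary by Debian release):

```shell
# fetch the build dependencies for netatalk
sudo apt-get build-dep netatalk
# download the source package and rebuild it with SSL enabled
DEB_BUILD_OPTIONS=ssl apt-get -b source netatalk
# install the resulting package
sudo dpkg -i netatalk_*.deb
```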

So finally I got an AFP server for my Mac (compare this icon to the famous BSOD icon for SMB servers in Leopard :P):

AFP server icon in Leopard

And as expected, AFP is much, much faster than SMB. Here's the write test:

For AFP:

fishy@McManaman:~$ dd if=/dev/zero of=/Volumes/Home\ Directory/foo
^C57345+0 records in
57345+0 records out
29360640 bytes (29 MB) copied, 11.0833 s, 2.6 MB/s

And for SMB:

fishy@McManaman:~$ dd if=/dev/zero of=/Volumes/fishy/bar
^C4235+0 records in
4235+0 records out
2168320 bytes (2.2 MB) copied, 10.6889 s, 203 kB/s

I'm impressed!


22:38:32 by fishy - linux - Permanent Link

4 comments - no trackbacks yet - karma: 30 [+/-]

Wednesday, August 01, 2007

The reversed diff

We use diff to find the differing lines between 2 files, but sometimes we also need to find the lines that are the same in 2 files. So we need a "reversed diff".

And this command can be used as the reversed diff:

cat file1 file2 | sort | uniq -d
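Note that this relies on neither file containing duplicate lines of its own; otherwise uniq -d reports those too. On sorted input, comm does the same thing more directly (a quick illustration with made-up files):

```shell
# two sorted files with two lines in common
printf 'apple\nbanana\ncherry\n' > file1
printf 'banana\ncherry\ndate\n' > file2

# -1 suppresses lines unique to file1, -2 lines unique to file2,
# leaving only the common lines: banana and cherry
comm -12 file1 file2
```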


17:55:56 by fishy - linux - Permanent Link

no comments yet - no trackbacks yet - karma: 57 [+/-]

Wednesday, March 21, 2007

Bash script: batch resize your photos

If you took some photos with your camera and want to post them somewhere (for example, I want to post photos of my Treo 650 because I'm going to sell it), you may need to batch resize them.

This bash script shows how to use ImageMagick to batch resize your photos:

#!/bin/sh

# resize every .JPG in the current directory to fit 1024x768;
# quoting handles filenames with spaces
for file in *.JPG; do
        convert -resize 1024x768 "$file" "${file%.JPG}_resize.jpg"
done
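The ${file%.JPG} part is plain shell parameter expansion: it strips the trailing .JPG suffix before the new suffix is appended. A quick illustration:

```shell
file="IMG_0001.JPG"
# ${file%.JPG} removes the trailing ".JPG",
# so this prints IMG_0001_resize.jpg
echo "${file%.JPG}_resize.jpg"
```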


21:28:36 by fishy - linux - Permanent Link

1 comment - no trackbacks yet - karma: 29 [+/-]

Tuesday, March 20, 2007

Note: set proxy for wget

wget -Y -e "http_proxy=host:port" url

"How to set a proxy for wget?" I've been asked this question many times, but it doesn't seem to appear in the "-h" output or the man page, so I always forget it.

That's why I'm making a note here :)
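For a permanent setting, the same can presumably go into ~/.wgetrc instead (a config sketch; the proxy address is a placeholder):

```
# ~/.wgetrc
use_proxy = on
http_proxy = http://proxy.example.com:3128/
```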


00:42:01 by fishy - linux - Permanent Link

4 comments - no trackbacks yet - karma: 37 [+/-]

May the Force be with you. RAmen