Wednesday, October 28, 2015

DHCP Failover on RHEL 7

As always, I am not the authority on this subject; however, I have successfully added “failover” to our existing DHCP server, one whose OS has been replaced several times with the dhcpd.conf simply copied over each time.

Configuring DHCP failover is not especially difficult. However, if you are in an “Enterprise” or “Corporate” environment (i.e. multiple subnets), then your router will require an additional “ip-helper” entry for each subnet. You or your network engineer will need to perform this task for the following system to work. In our case, we simply added a secondary ip helper-address <IP> to each subnet (VLAN) on our hardware router.

Prerequisites:
0) A properly configured router.
1) dhcpd running and configured properly.
2) The EPEL repo installed.
3) Passwordless ssh-key logins configured between the two DHCP servers.
4) Time synchronized on both servers (via ntpd or vm-tools’ options).
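For prerequisite 3, here is a minimal sketch; the key path is hypothetical and 10.10.0.101 is this post's example secondary server, so adjust both to your environment:

```shell
# Generate a dedicated key pair with no passphrase, so scripts can use it
# non-interactively (./failover_key is a hypothetical path).
rm -f ./failover_key ./failover_key.pub
ssh-keygen -t rsa -b 4096 -N "" -f ./failover_key -q

# Then, on the primary, install the public key on the secondary and confirm
# that no password prompt appears:
#   ssh-copy-id -i ./failover_key.pub root@10.10.0.101
#   ssh -i ./failover_key root@10.10.0.101 true && echo "passwordless login OK"
```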

I reviewed the following sources for this process:
http://blog.whatgeek.com.pt/2012/03/dhcp-failover-load-balancing-and-synchronization-centos-6/
https://kb.isc.org/article/AA-00502/0/A-Basic-Guide-to-Configuring-DHCP-Failover.html
https://www.howtoforge.com/how-to-set-up-dhcp-failover-on-centos5.1
http://www.cyberciti.biz/faq/linux-inotify-examples-to-replicate-directories/
http://linux.die.net/man/5/incrontab
http://www.lithodyne.net/docs/dhcp/dhcp-5.html

The first link above had the best idea of creating include files for the configuration. This allowed me to automate copying the dhcpd.conf file to the secondary server upon any changes.

Obviously, your IP scheme will be much different; please adjust accordingly. Also, this write-up may in fact not apply to all configurations out there – you may consider this post just another resource for your research.

Let’s begin…

In addition to our existing RHEL 7 server running dhcpd, I have configured a second machine running the same. For now, the secondary dhcpd service is stopped.

In my case, I edited the primary server’s /etc/dhcp/dhcpd.conf to contain
include "/etc/dhcp/dhcpd.failover";
and to contain at least one pool declaration. In my case, because I was still testing things, in an existing subnet I commented out the existing range statement and added a pool statement just below it with the same range and the required failover statement:
 subnet 10.20.0.0 netmask 255.255.0.0 {
    option broadcast-address 10.20.255.255;
    option routers 10.20.1.1;
    #range 10.20.20.1 10.20.22.254;
    pool {
        range 10.20.20.1 10.20.22.254;
        failover peer "dhcpfailover";
        }
    }
Again, note that at least one pool is required. I learned the hard way that without it, the dhcpd service will not start, leaving my network without a DHCP server for several minutes. If you are unsure where to put the include statement, just put it after your initial options and just before your first subnet.

You can either add a pool statement to each of your subnets at this point, or just do one for now for testing purposes. Each pool requires a failover peer ... statement for failover to actually work.

You may test your dhcpd.conf file with the command dhcpd -t -cf /etc/dhcp/dhcpd.conf.

At this point, you can copy your primary /etc/dhcp/dhcpd.conf to your secondary server. We will ultimately script a mirroring process. Just to reiterate: this /etc/dhcp/dhcpd.conf contains include "/etc/dhcp/dhcpd.failover"; and one pool, and the file is copied identically to the secondary DHCP server.

Now, one of the most important parts is for the contents of the include files.  Each dhcp server will have a differing /etc/dhcp/dhcpd.failover file.

Create your primary server’s /etc/dhcp/dhcpd.failover to contain
# Failover specific configurations
failover peer "dhcpfailover" {
    primary;
    address 10.10.0.100;
    port 647;
    peer address 10.10.0.101;
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 600;
    split 128; # 128 is balanced; use 255 if primary is 100% responsible until failure.
    load balance max seconds 3;
}
and the secondary server’s /etc/dhcp/dhcpd.failover to contain
# Failover specific configurations
failover peer "dhcpfailover" {
    secondary;
    address 10.10.0.101;
    port 647;
    peer address 10.10.0.100;
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    load balance max seconds 3;
}
Obviously, my primary DHCP server is 10.10.0.100 and my secondary is 10.10.0.101; change yours accordingly.

You will also have to open the firewall for TCP port 647 on each server. In my case, I chose to allow connections only from the specified IP sources.
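With firewalld on RHEL 7, that can look like the following sketch (run on the primary, 10.10.0.100; swap the source address on the secondary):

```shell
# Accept the failover port only from the peer server.
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.10.0.101/32" port port="647" protocol="tcp" accept'
firewall-cmd --reload
firewall-cmd --list-rich-rules   # verify the rule took effect
```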

At this point, you may start your secondary server’s dhcpd service with systemctl start dhcpd. If it starts properly without error, then it is safe to restart your primary server’s dhcpd service with systemctl restart dhcpd. You should test that it’s running properly at this point, and if not, fix it promptly or reverse your changes, review, and try again. You may also use commands such as journalctl -xn30 or systemctl -n30 status dhcpd to locate faults. You should also enable the dhcpd service for auto-start with systemctl enable dhcpd.

In this system, your DHCP changes should only be applied to the primary server. The secondary server only exists for failover purposes.

Changes to the primary DHCP server do not mirror to the secondary server by default, so we will automate this. For this process, we'll use root ssh; I have chosen to allow root ssh via ssh-key so that it can be automated with scripts. Of course, I have firewalled my ssh ports to allow only certain IP ranges.

** Before proceeding, please note: a comment by "ZsZs" recommended replacing my incrond usage with a systemd built-in feature. I concur but have NOT yet tried it. Please refer to https://wiki.archlinux.org/index.php/rsync#Automated_backup_with_systemd_and_inotify for a better alternative. ...continuing...

I chose to utilize incrond to simplify this mirroring process. incrond uses inotify to watch a file (or directory) for changes and then execute a specified command. This tool is not in the default RHEL repositories; to install it you will need the “Extra Packages for Enterprise Linux” (EPEL) repository, which is quite easy to set up.
For RHEL 7, installation is as follows:
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -i epel-release-latest-7.noarch.rpm
Afterward, install and enable incrond as follows:
yum -y install inotify-tools incron
systemctl enable incrond
First, let’s write a script to copy dhcpd.conf to the secondary server and restart its service. Create a file /root/scripts/update-failover-server.sh to contain the following (due to potential issues, use full command paths):
#!/bin/bash
/usr/bin/scp /etc/dhcp/dhcpd.conf root@10.10.0.101:/etc/dhcp/dhcpd.conf
/usr/bin/ssh root@10.10.0.101 '/usr/bin/systemctl restart dhcpd'
/usr/bin/systemctl restart incrond #CRITICAL ISSUE; one-time trigger and subsequent fail work-around
and be sure to mark it executable (chmod +x). Again, these are my IPs; yours will vary. Most importantly, note that I have already enabled passwordless login between the servers with ssh-keys. This automation will NOT work without that. You may in fact want to test your script’s success by running it manually first.

We can now configure a “watch” for any changes to the dhcpd.conf file. Use the command EDITOR=nano incrontab -e to edit the incron-file with syntax FILE TRIGGERLIST COMMAND [OPTION] (refer to the links referenced above):
/etc/dhcp/dhcpd.conf IN_MODIFY,IN_ATTRIB,IN_CREATE /root/scripts/update-failover-server.sh
Here, I’m trying to cover any modification to dhcpd.conf. Editors and Webmin modify the file differently, so this trigger list should cover both cases.

We can now start the incrond service with the command systemctl start incrond.

At this point, both servers should be running and able to serve IP addresses. You should verify such.

Now, you may test that any changes to your primary dhcpd.conf propagate to the secondary server. Go ahead and modify your primary /etc/dhcp/dhcpd.conf by your preferred method and analyze what happens.

Once you find that everything is a success, you may add pool statements to each subnet, moving each range statement inside its pool.
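For example, a second subnet (hypothetical addresses) converted the same way:

```
subnet 10.30.0.0 netmask 255.255.0.0 {
    option broadcast-address 10.30.255.255;
    option routers 10.30.1.1;
    #range 10.30.10.1 10.30.12.254;
    pool {
        range 10.30.10.1 10.30.12.254;
        failover peer "dhcpfailover";
    }
}
```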

---
As Always, Good Luck! You can thank me with bitcoin.    


95% Written with StackEdit.

Friday, September 4, 2015

vCenter Server Appliance (vCSA) 5.5 SSL certificate replacement and root password reset the hard and/or easy way. (or The vCSA is Dead, Long Live the vCSA)








It all started because my SSL certificate was expired, and with recent updates to Chrome and Firefox we could not log in. I found this perfect article (http://www.virtuallyghetto.com/2013/04/automating-ssl-certificate-regeneration.html) to regenerate the certificate via the command line, but I was locked out of the root account. I thought, as many do, that I had forgotten the root password, but it turns out that with v5.5, root passwords expire in 90 days, and if you don't set an SMTP server and email address, you will never be notified. Furthermore, after some time the account will be locked.

On the vCSA console you have an option to Login. I don't really know why, but somehow after failing some logins or semi-resetting the account, I found that if I mistyped the password 3 times, I would actually get the root prompt. If you cannot get the root prompt, luckily you can use this excellent method (http://www.virtuallyghetto.com/2013/09/how-to-recover-vcsa-55-from-expired.html) to get to the files as well. If you do use the link's method, please note that you will need to mount/remount with read-write access, which is not mentioned. Hint: mount -o remount,rw /mnt

So in the following commands, you will see [/mnt]. What I mean by this is that if you use the link's method, then you need /mnt/path; however, if you were able to get the vCSA's root prompt, then simply use /path (excluding /mnt).

Of course, you should back up any files you plan to change!

We will use vi because it's the editor built into the vCSA. Remember, in vi you press i or a to insert or append (entering typing mode). Afterward, <esc>:wq! force-saves and quits, and <esc>:q! aborts and quits.

Firstly, I found that the password policy required a new, previously unused password that had to meet complexity requirements. This sucked tremendously considering I really, really wanted to re-use the password. If you wish to do the same, try this; if not, skip it.

Change the root password policy with vi [/mnt]/etc/pam.d/common-password

change
password        requisite       pam_cracklib.so dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1 minlen=8 difok=4 retry=3
password        required        pam_pwhistory.so enforce_for_root remember=4 retry=3
to
password        requisite       pam_cracklib.so dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1 minlen=8 difok=0 retry=0
password        required        pam_pwhistory.so enforce_for_root remember=0 retry=3
The above sets that the password does not need to be significantly different from prior passwords, and stops the prompt from requesting retries.

An alternative way might be to purge the password history file with rm [/mnt]/etc/security/opasswd, which of course I did.

If you can get to the vCSA's root prompt, then you can change the root password with passwd

However, if you cannot, then you can potentially reuse your old password by restoring the password hash from a backup shadow.* file. Revisit the above-mentioned article (http://www.virtuallyghetto.com/2013/09/how-to-recover-vcsa-55-from-expired.html) as it explains the shadow file.

Thus, I was successful in restoring the default password of vmware by finding that [/mnt]/etc/shadow.UPDATE contained its hash. I copied the hash (installation-specific, I'm sure) into the root line of the existing [/mnt]/etc/shadow and made sure the 5th field was nothing (::). It looked like this:
root:$2y$10$Gye6636Oxy/2yK01.7MW0ud8pSE90cEYr92kLSwDvJmULjmTPnu0O:16581:0:90:7:::

Once I had accomplished all this, I rebooted and was able to log in with vmware. I reset my password with passwd at the root prompt. It complained that the password was too simple, but accepted it nonetheless.

Note: Through all this, I found that the actual client login Administrator@vSphere.local had never changed, so don't expect it to be the new one you just set.

Furthermore, I followed this very fitting article (http://www.virtuallyghetto.com/2013/09/administrator-password-expiration-in.html) to completely disable root password expiration with chage -M -1 -E -1 root.

If you use VMware Update Manager (VUM), you may need to remediate its connection with http://kb.vmware.com/kb/2048210.

In addition to the resources linked above, the following were referenced during my adventure-less adventure.
My other recent posts also relate to this issue due to Mozilla, Google and Microsoft weak SSL deprecation policies.



---
As Always, Good Luck! You can thank me with bitcoin.    


Wednesday, September 2, 2015

VMWare vCenter vSphere Web Client Chrome 45 ERR_SSL_WEAK_SERVER_EPHEMERAL_DH_KEY


VMWare vCenter vSphere Web Client + Chrome 45
Server has a weak ephemeral Diffie-Hellman public key
ERR_SSL_WEAK_SERVER_EPHEMERAL_DH_KEY
a.k.a Forward Secrecy

I patched this together rather quickly, but I think it's all here! Today we had an issue with Chrome 45 failing to connect to the VMware 5.1 vCenter vSphere Web Server (vSphere Web Client). This is how I fixed it. (It also seems to work in Firefox.)

Create a new self-signed certificate (in Linux):
openssl genrsa 2048 > rui.key
openssl req -new -key rui.key > rui.csr
#openssl x509 -in rui.cer -out rui.crt
openssl x509 -req -days 1825 -in rui.csr -signkey rui.key -out rui.crt
openssl pkcs12 -export -in rui.crt -inkey rui.key -name rui -passout pass:testpassword -out rui.pfx
You must use testpassword if you retain the default tomcat keystorePass setting in the XML mentioned below.
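If you prefer to script this, the same steps can run non-interactively with -subj (the CN below is a placeholder), and the last command verifies the bundle against the password tomcat expects:

```shell
# Non-interactive variant of the steps above; adjust the subject to your host.
openssl genrsa -out rui.key 2048
openssl req -new -key rui.key -subj "/CN=vcenter.example.local" -out rui.csr
openssl x509 -req -days 1825 -in rui.csr -signkey rui.key -out rui.crt
openssl pkcs12 -export -in rui.crt -inkey rui.key -name rui -passout pass:testpassword -out rui.pfx

# Sanity-check the PKCS12 bundle before deploying it:
openssl pkcs12 -info -in rui.pfx -noout -passin pass:testpassword && echo "PFX OK"
```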

Back up and replace the files in C:\Program Files\VMware\Infrastructure\vSphereWebClient\DMServer\config\ssl\ with the files just created.  *** My other vCenter did not have this folder; it was C:\Program Files\VMware\Infrastructure\vSphereWebClient\logbrowser\conf\ instead. (Upgrade vs. fresh install?)

Edit C:\Program Files\VMware\Infrastructure\vSphereWebClient\server\config\tomcat-server.xml
was
        <Connector port="9443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="500" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" ciphers="SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_DH_RSA_WITH_AES_256_CBC_SHA, TLS_DH_DSS_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_DH_RSA_WITH_AES_128_CBC_SHA, TLS_DH_DSS_WITH_AES_128_CBC_SHA" keystoreFile="C:\ProgramData\vmware\vSphere Web Client\ssl\rui.pfx" keystorePass="testpassword" keystoreType="PKCS12"></Connector>
changed to (removed RC4 and DHE-only ciphers)
        <Connector port="9443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="500" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" ciphers="TLS_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_DH_RSA_WITH_AES_256_CBC_SHA, TLS_DH_DSS_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_DH_RSA_WITH_AES_128_CBC_SHA, TLS_DH_DSS_WITH_AES_128_CBC_SHA" keystoreFile="C:\ProgramData\vmware\vSphere Web Client\ssl\rui.pfx" keystorePass="testpassword" keystoreType="PKCS12"></Connector>
Restart both the vspherewebclientsvc and vctomcat services. Be patient; it will take a few minutes for the services to be ready to serve the web pages correctly.


---
As Always, Good Luck! If I saved your ass, you can thank me with bitcoin.    


Friday, August 28, 2015

https SSL cipher remediation for webservers 2015


I don’t know jack! I am NOT a security professional by trade, but please at least be aware that simply installing an SSL certificate on your server does NOT make it secure.

Thanks to Qualys SSL Labs (https://www.ssllabs.com/ssltest/), testing your server for SSL security is dead simple. I recommend testing every public site you manage immediately!

Once you know your status, here are some invaluable information resources you will need for remediation:


Setup your [Windows] IIS for SSL Perfect Forward Secrecy and TLS 1.2 : https://www.hass.de/content/setup-your-iis-ssl-perfect-forward-secrecy-and-tls-12

Additionally, I had one server that used stunnel (https://www.stunnel.org) on Windows. I found the following to be good settings for C:\Program Files (x86)\stunnel\stunnel.conf:
sslVersion = all
options = NO_SSLv2
options = NO_SSLv3

ciphers = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
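To spot-check the result from another machine, one way (yourserver:443 is a placeholder for your stunnel endpoint) is to force a DHE-only handshake with openssl s_client; after the change, the connection should be refused, and on servers that still accept it, the “Server Temp Key” line in the output reveals the ephemeral key size:

```shell
# Offer only ephemeral-DH ciphers; a failed handshake here means weak DHE
# is no longer accepted.
openssl s_client -connect yourserver:443 -cipher 'EDH' < /dev/null
```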

Again, I am NOT a security expert, so please do not blindly reconfigure your settings without fully understanding what you are doing. I do not think my advice is wrong, but there absolutely might be better settings available.

Here is a good Mozilla resource for Server Side TLS (https://wiki.mozilla.org/Security/Server_Side_TLS) including a link to Mozilla SSL Configuration Generator (https://mozilla.github.io/server-side-tls/ssl-config-generator/)

As Always, Good Luck! You can thank me with bitcoin.    


---

Wednesday, June 10, 2015

Mikrotik RouterBoard RB2011UiAS-2HnD-IN


I’m the proud owner of a highly configurable and capable Small Office/Home Office router, namely the Mikrotik/RouterBoard RB2011UiAS-2HnD-IN. But don’t let the “SOHO” description minimize its capability. Running a custom Linux “RouterOS”, it is professional and powerful. If Mikrotik sounds familiar, you may have previously come across their popular software “The Dude”.


Previously, I used a DD-WRT-flashed Cisco (Linksys) E1000 v2, but with only four 100Mbit LAN ports it lacked speed, capability, and sometimes stability. I was mostly happy with this refurbished device at the cost of $20 from Big Lots. However, lately it had started flaking out with DNS for unknown reasons. Furthermore, the wireless never reached the opposite side of my home.

When I signed up for Internet with my ISP, rates were touted as 30Mbps down and 3Mbps up. Since then, they’ve upgraded to 60Mbps down and 4Mbps up. With the Cisco and DD-WRT, my best rates were just under 18Mbps down and 1.6Mbps up. I could never figure out what the problem was and always assumed I was being cheated by my ISP. To my amazement, after configuring the Mikrotik, speed tests immediately reported 60+Mbps down and 4.4Mbps up.

The fun part of the Mikrotik is configuring it – I am a tech-geek after all. If you are not technically ready for such, you may want to just go to your local Walmart and get a basic home router because the RouterOS is not immediately user-intuitive if you are without some networking background. You can see a demo UI here, but don’t let it scare you – luckily the “Quick Set” menu-item will get the device up and running fully. All the other advanced options are just that, optional.

The one caveat is that it ships without full documentation. The only instruction given is a micro-printed three-page quick setup guide. It stated that port 1 was set to DHCP for WAN, the gateway address was 192.168.88.1, and the user/pass defaulted to admin/[blank] – that was all I needed. It also referred to the documentation online at http://wiki.mikrotik.com/wiki/Category:Manual. This is where you get the detailed instruction – again, a networking background helps. Mikrotik also has a community forum that will gladly assist with basic to advanced needs. Of course, I also found YouTube to be indispensable for some things like port-forwarding, VPN, and even port-knocking. I also chose to use an ACL to allow only specific MAC addresses wireless access. Bittorrent was blocked by default, but a YouTube video showed me exactly how to configure the firewall.

Most home routers have no more than four or five 100Mbps ports. The RB2011UiAS-2HnD-IN has five Gigabit ports and five 100Mbps ports. (Gigabit port #1 is dedicated to WAN/Internet, however.) It also has an SFP port for fiber optics. Having Gigabit ports certainly helps with file transfers to NAS backup devices such as my aging Synology 410j.

I even re-purposed my Cisco E1000 as a bridged client so that I could finally get a decent signal to my smart TV for uninterrupted YouTube.

I would absolutely recommend this to any computer geek. You can view a HiDef unboxing here and another here. Note that these videos show the European plugs, whereas the U.S. plug is proper as expected. Also, the connector is external as expected, like in the first video. I am unsure why the second video’s connector is internal, but I shared it because you can view the router’s LCD screen and internal hardware.


Thank you and good luck!







---

Tuesday, May 26, 2015

Automated Hamachi Reset BASH Script

On occasion, Hamachi may be in a failed state on your always-on device. Alternately, when waking from sleep mode, Hamachi may not be functional but still report itself online.

Thus, it might be very useful for a cronjob to check the state and reset. This is especially useful on remote machines that you need connectivity to – for example, a remote machine in sleep mode where WOL is possible from another remotely-accessible LAN device.

I have written a BASH script to check Hamachi and force a re-login if necessary. Maybe this script is not all-encompassing, but it’s a good start.

Let’s assume you have 2 or more client IP’s and also assume they are in an always-on state. (If for example you have 4 clients, but only 3 remain always-on, you will only use the 3 that you expect to be on.)

Below is my script, which you will need to edit (IP addresses and Hamachi network name). It will ping each Hamachi neighbor and reset only if ALL are unreachable. Alternatively, it will only issue a go-online if failures>0 and failures<neighbors.

This script uses bash installed from Entware-NG or Optware or Optware-NG. You will have to heavily modify this script if you prefer the built-in ash shell that is default with Synology.
You should edit, save, and mark the script executable.  (e.g. chmod +x ~/scripts/check-hamachi.sh).
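The embedded script may not be visible in this export; here is a minimal sketch of the logic just described, using the logmein-hamachi CLI (the peer IPs and network name are placeholders you must edit):

```shell
#!/bin/bash
# check-hamachi.sh: ping each always-on Hamachi neighbor; if ALL are
# unreachable, assume the client is dead and do a full reset. If only
# some are unreachable, just go online again.
PEERS=("25.1.2.3" "25.4.5.6" "25.7.8.9")   # placeholder Hamachi IPs
NETWORK="my-hamachi-net"                    # placeholder network name

fails=0
for ip in "${PEERS[@]}"; do
    ping -c 1 -W 3 "$ip" >/dev/null 2>&1 || fails=$((fails + 1))
done

if [ "$fails" -eq "${#PEERS[@]}" ]; then
    # Everyone unreachable: restart the daemon and log back in.
    sudo /etc/init.d/logmein-hamachi start
    hamachi login
    hamachi go-online "$NETWORK"
elif [ "$fails" -gt 0 ]; then
    # Partial failures: a go-online is usually enough.
    hamachi go-online "$NETWORK"
fi
```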

Since the script executes sudo /etc/init.d/logmein-hamachi start, you must add the command to your sudoers file:

Run the command EDITOR=nano sudo visudo and AT THE VERY LAST LINE (or elsewhere if you know what you are doing) add:
username ALL=(root) NOPASSWD: /etc/init.d/logmein-hamachi (where username is your account; NOPASSWD lets the script run unattended).

Lastly, cron it (crontab -e) to run every 5 minutes under your own user account. (e.g. */5 * * * * ~/scripts/check-hamachi.sh)

As always, Good luck!
---

Thursday, May 21, 2015

Practically "Free" Landline service with ObiHai and Google voice (VOIP)



I have purged myself from paying for a traditional landline. Furthermore, I can hardly believe that it's been this long. Since November of 2013, I have had a practically free landline by using Google Voice via an OBIHAI.com device over my Internet service. (Yes, I do pay for an ISP.) Google Voice calling is free nationwide, and if you need to call overseas, they offer extremely low international rates.

Had I retained my traditional phone service at $27 per month, I'd have paid $486 plus fees plus taxes for the last 18 months. However, I have only paid $14.40 for my optional e911 from Anveo.com.

Specifically, I purchased the OBi200 from Amazon, which is now only $48.


I then acquired a Google Voice phone number.  (Technically, I ported my existing number for a cost, but that is optional. If you find you cannot port your landline number, you can circumvent the system by porting it to a mobile SIM first, then to Google -- this was a $30, 5-day process for me -- again, this is optional, as I preferred to retain my existing number.)

OBiHai officially supports Google Voice.

My family has utilized "FREE" VOIP phone service for the last 18 months and it is of the same quality as any phone service from a traditional provider.  It works perfectly with my multi-phone cordless base-set much like this one.  Additionally, with Google Voice, I am able to receive voicemail recordings and missed call notifications direct to my Gmail.  I can also block numbers easily via web interface and use other awesome features such as having Calling Circles for filtering calls direct to voicemail.

Most telephony service providers now only offer VOIP anyway.  With Google and OBiHai, you can remove the middleman and have the same service without the $20-30 monthly cost associated with it.

Google Voice does NOT provide 911 service; however, Anveo.com provides me with e911 for a mere 80 cents per month.

Please consider using my Amazon affiliate Ad for the OBi200 above and optionally my Anveo.com referral code 7384061.


For a little more detail on the ObiHai device, see this nice article: http://www.geekzone.co.nz/sbiddle/8736

Thank you kindly and good luck!
---

Thursday, May 14, 2015

password checkers


You really should have complexity in your passwords.  Here are a couple of good password checkers to test yours (it's suggested not to test your actual passwords, but something highly similar instead):
Pass (a.k.a. password-store) is quite an interesting and seemingly effective, albeit not immediately user-friendly, password manager that is open source and uses existing technology like GPG and Git.  For more information visit http://www.passwordstore.org/.  Also note that its community has created several interfaces, such as the Firefox addon passff.

Wednesday, May 13, 2015

Install Pinta 1.6 in Debian Jessie 8.0

Paint.NET is a very nice PaintShop Pro alternative for Windows. And luckily, Pinta is a fantastic Linux alternative to Paint.NET.

I had Pinta installed in my Debian 8.0 (Jessie) and it was stuck at version 1.5.

I actually used Pinta’s Ubuntu repo to install it, and as it turns out, I needed to change the repo to reference the “Trusty” version to upgrade to Pinta 1.6.

When I changed it as such, the upgrade reported dependency problems with mono; specifically, it seems mono v4 must have recently come out.

To resolve my issue, I had to first purge pinta and mono-runtime, then reinstall them.
sudo aptitude purge pinta mono-runtime

edit your /etc/apt/sources.list to include:
#Pinta
deb http://ppa.launchpad.net/pinta-maintainers/pinta-stable/ubuntu trusty main 
deb-src http://ppa.launchpad.net/pinta-maintainers/pinta-stable/ubuntu trusty main 

Then update and install:
sudo aptitude update
sudo aptitude install mono-runtime pinta

Pinta should now be v1.6.

You may also want to look at Krita, which is in the Debian repositories – good luck!

---

Tuesday, May 5, 2015

run Sublime Text from OSX commandline

OSX is a little annoying in that it's not quite as easy to symlink Sublime Text to a "/bin/command".  In doing so, all kinds of console cruft gets output.

But I've figured out a better way to launch Sublime Text via the command line:

In your ~/.bash_profile add the following function:

function sublime() {
    open -a "Sublime Text.app" "$@"
}


----

Friday, May 1, 2015

BASH copy preserving timestamps in Linux and OSX




The cp command annoys me sometimes in that I expect my files to retain their timestamps.  However, that is not the default.  To set this behavior automatically, aliases may be used.

In a Linux ~/.bashrc file, include the following alias:
alias cp="cp --preserve=timestamps"

In an OSX ~/.bash_profile, include the following alias:
alias cp="cp -p"

In both cases you will need to exit and restart bash.

---

Thursday, April 2, 2015

Automatic Malware IP Filters for NfSen


Below are my plain text notes for adding crontab based automatic malware filters in my CentOS based nfsen.

This was done with nfsen 1.3.6p1 and nfdump 1.6.6 -- i have not yet upgraded to any newer versions which may may be different.

Note that this takes into account my setup's file-structure -- yours may differ.

###############################
NFSEN NETFLOW AUTOMATED FILTERS
###############################

###############################
HOW TO
###############################
For each of the following names: "Malware-Domain-List", "Hostile_IPs", "ZeusBotNet_CC" (if you change the names, you will have to change the scripts)
Create new Profile
Group under "malware"
Description "Crontab enabled automatic filter" (and whatever other info you like to add, maybe the URLs from the scripts below)
no start date
no end date
default max size
default expire
1:1 channels
Shadow Profile
Sources: select all the sources you like.
Filter: temporarily use "not any"
[Create]
This will create a "blank" filter for each of your sources.
Now create the following scripts, mark them executable, and run each once manually; afterward, add them to crontab.
note: The *-filter.txt files (created by the gui) should be marked writable.
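Assuming the paths used throughout these notes, one way to do that:

```shell
# Mark the GUI-created filter files writable, with ownership matching the
# apache comments in the scripts below.
chown apache:apache /usr/local/nfsen/profiles-stat/malware/*/*-filter.txt
chmod 664 /usr/local/nfsen/profiles-stat/malware/*/*-filter.txt
```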


###############################
 ✓ root@netflow: /usr/local/nfsen/profiles-stat/malware $ find ./ -name "*.sh"
###############################
./Malware-Domain-List/import-list.sh
./Hostile_IPs/import-list.sh
./ZeusBotNet_CC/import-list.sh


###############################
✓ root@netflow: /usr/local/nfsen/profiles-stat/malware $ cat Hostile_IPs/import-list.sh
###############################
#!/bin/bash

export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

cd /usr/local/nfsen/profiles-stat/malware/Hostile_IPs

printf "IP in [\n" > temp.txt
wget -qO- http://www.autoshun.org/files/shunlist.csv | tail -n +2 | awk -F, '{print $1}' >> temp.txt
printf "]\n" >> temp.txt

for f in *-filter.txt ; do
   cp temp.txt $f
done

rm temp.txt

#-rw-rw-r--. 1 apache apache *-filter.txt

###############################
 ✓ root@netflow: /usr/local/nfsen/profiles-stat/malware $ cat ./Malware-Domain-List/import-list.sh
###############################
#!/bin/bash

export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

cd /usr/local/nfsen/profiles-stat/malware/Malware-Domain-List

printf "IP in [\n" > temp.txt
wget -qO- http://www.malwaredomainlist.com/hostslist/ip.txt >> temp.txt
printf "]\n" >> temp.txt

for f in *-filter.txt ; do
   cp temp.txt $f
done

rm temp.txt

#-rw-rw-r--. 1 apache apache *-filter.txt


###############################
 ✓ root@netflow: /usr/local/nfsen/profiles-stat/malware $ cat ./ZeusBotNet_CC/import-list.sh
###############################
#!/bin/bash

export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

cd /usr/local/nfsen/profiles-stat/malware/ZeusBotNet_CC

printf "IP in [\n" > temp.txt
wget --no-check-certificate -qO- https://zeustracker.abuse.ch/blocklist.php?download=badips | tail -n +7  >> temp.txt
printf "]\n" >> temp.txt

for f in *-filter.txt ; do
   cp temp.txt $f
done

rm temp.txt

#-rw-rw-r--. 1 apache apache *-filter.txt


###############################
 ✓ root@netflow: /usr/local/nfsen/profiles-stat/malware $ crontab -l | tail -n 4
 ###############################
0 * * * * /usr/local/nfsen/profiles-stat/malware/Hostile_IPs/import-list.sh
0 * * * * /usr/local/nfsen/profiles-stat/malware/Malware-Domain-List/import-list.sh
0 * * * * /usr/local/nfsen/profiles-stat/malware/ZeusBotNet_CC/import-list.sh




---
As Always, Good Luck! You can thank me with bitcoin.    



Saturday, March 21, 2015

Linux Compatible Online Taxes


After leaving MS-Windows for good I had a hell of a time doing taxes. H&R Block online almost worked, but gave me problems AND wanted to charge more for processing my K-1.

Then I found TaxAct.com -- The best experience I've had doing taxes for the third year running. They are cheap and flawless even on Linux. Don't forget to whitelist the domain on any AdBlocker or ScriptBlocker you may use.

I absolutely recommend TaxAct.com.

---