Use Composer with Kohana 3.3

Kohana 3.3 has a file named “composer.json” in the root of the project, however it is not configured for Kohana and Composer is not installed.

This example was tested on a Mac. Windows users will need to install Composer following the instructions on the Composer project site, at http://getcomposer.org/doc/00-intro.md.

Install Composer

First, modify composer.json to tell composer where to install libraries.

Change its contents to:

{
	"config": {
		"vendor-dir": "application/vendor"
	},
	"require": {
		"phpunit/phpunit": "3.7.24",
		"phing/phing": "dev-master"
	}
}

Open a terminal and navigate to the root of your Kohana project.

curl -sS https://getcomposer.org/installer | php
php composer.phar install
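If the install succeeded, Composer will have created the vendor directory configured above. A quick sanity check, run from the project root (a sketch; the path assumes the vendor-dir setting shown earlier):

```shell
# Check that Composer placed its autoloader where bootstrap.php will expect it.
# "missing" means the install did not complete or vendor-dir is different.
if [ -f application/vendor/autoload.php ] ; then
  echo 'Composer autoloader present'
else
  echo 'autoload.php missing'
fi
```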

Modify Kohana’s bootstrap.php File

Add this to bootstrap.php. Right above the router configuration is a good spot.

/**
 * Autoload composer libraries
 * 
 */
require APPPATH . 'vendor/autoload.php';

Adding a Library

You can either edit composer.json with your required library (and then re-run the install command above), or use the composer require verb from the command line to add the library, which also modifies composer.json.

php composer.phar require "monolog/monolog" "1.6.0"

The monolog package page is https://packagist.org/packages/monolog/monolog. Note that you need the package name for the first parameter and the version for the second parameter.

Drop MySQL Tables by Partial Name

This is an excerpt from Adminbuntu, a site for Ubuntu Server administrators:

http://www.adminbuntu.com/drop_mysql_tables_by_partial_name

IMPORTANT! First back up your database!

This procedure will allow you to drop many tables at once where each table name to be dropped starts with the same string.

There are two steps in the procedure:

  • Create a MySQL statement file containing all the DROP commands called drop_commands.sql
  • Run the drop_commands.sql file

1. Create drop_commands.sql File

This creates a MySQL statement file that will drop all tables that begin with a specified string.

  • Replace STRING1 with the string to match
  • Replace USERNAMEHERE with the MySQL user to use
  • Replace PASSWORDHERE with the correct password
  • Replace DATABASENAMEHERE with the name of the database

mysql --user=USERNAMEHERE --password=PASSWORDHERE -e "SELECT table_name FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME LIKE 'STRING1%' AND TABLE_SCHEMA='DATABASENAMEHERE' " | grep -v table_name | xargs -L 1 echo "DROP TABLE " | sed "s/\$/;/" | sed -e '1 i SET FOREIGN_KEY_CHECKS = 0;'| sed -e '$s@$@\nSET FOREIGN_KEY_CHECKS = 1;@' >drop_commands.sql
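You can preview what the pipeline produces without touching a database by substituting sample table names for the mysql invocation. This sketch uses two hypothetical table names and GNU sed (the syntax of the `1 i` and `\n` forms above is a GNU extension):

```shell
# Simulate the mysql output with printf; the rest of the pipeline is the
# same as the real command.
printf 'cache_a\ncache_b\n' \
  | xargs -L 1 echo "DROP TABLE " \
  | sed "s/\$/;/" \
  | sed -e '1 i SET FOREIGN_KEY_CHECKS = 0;' \
  | sed -e '$s@$@\nSET FOREIGN_KEY_CHECKS = 1;@'
# Output:
#   SET FOREIGN_KEY_CHECKS = 0;
#   DROP TABLE  cache_a;
#   DROP TABLE  cache_b;
#   SET FOREIGN_KEY_CHECKS = 1;
```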

2. Execute the drop_commands.sql Command File

Examine drop_commands.sql to make sure it is doing what you want.

less drop_commands.sql

Run the drop_commands.sql text file through the mysql interpreter to drop all the selected tables.

mysql --user=USERNAMEHERE --password=PASSWORDHERE DATABASENAMEHERE < drop_commands.sql

Drop All Tables in a MySQL Database

This is an excerpt from Adminbuntu, a site for Ubuntu Server administrators:

http://www.adminbuntu.com/drop_all_tables_in_a_database

IMPORTANT: This is a dangerous command. Back up the database first!

Drop all tables in a database, without dropping the database itself.

This creates a MySQL statement file that will drop all tables that begin with a certain string.

  • Replace USERNAMEHERE with the MySQL user to use
  • Replace PASSWORDHERE with the correct password
  • Replace DATABASENAMEHERE with the name of the database

mysql --user=USERNAMEHERE --password=PASSWORDHERE -BNe "SHOW TABLES" DATABASENAMEHERE | tr '\n' ',' | sed -e 's/,$//' | awk '{print "SET FOREIGN_KEY_CHECKS = 0;DROP TABLE IF EXISTS " $1 ";SET FOREIGN_KEY_CHECKS = 1;"}' | mysql --user=USERNAMEHERE --password=PASSWORDHERE DATABASENAMEHERE
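Because tr joins the table names into one comma-separated list, the awk stage emits a single multi-statement line (DROP TABLE accepts a comma-separated list of tables). You can see the SQL it generates, without a live database, by substituting printf for the first mysql call and dropping the final one. The three table names here are made up for the dry run:

```shell
# Dry run: three fake table names stand in for the "SHOW TABLES" output.
printf 'users\norders\nlogs\n' \
  | tr '\n' ',' \
  | sed -e 's/,$//' \
  | awk '{print "SET FOREIGN_KEY_CHECKS = 0;DROP TABLE IF EXISTS " $1 ";SET FOREIGN_KEY_CHECKS = 1;"}'
# Prints: SET FOREIGN_KEY_CHECKS = 0;DROP TABLE IF EXISTS users,orders,logs;SET FOREIGN_KEY_CHECKS = 1;
```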

Upgrading Ubuntu Server to the Latest LTS Version

This is an excerpt from Adminbuntu, a site for Ubuntu Server administrators:

http://www.adminbuntu.com/upgrading_ubuntu_server_to_lastest_lts_version

If possible, don’t use SSH when upgrading a server. On Linode, you can use their Lish terminal, available from the virtual server’s console page.

This was tested while upgrading a 10.04 LTS Ubuntu Server to 12.04 LTS. The test server was a production web server with a large number of packages installed and configuration changes.

Back up the Server First

If your virtual hosting provider offers image backups, this is a good option. The important thing is knowing for certain that you can restore/recreate the server in case the upgraded server is not left in a bootable, usable condition.

Install the Upgrade Manager

sudo aptitude -y install update-manager-core

Double-check Configuration File

Run this command to check whether “/etc/update-manager/release-upgrades” has the line “Prompt=lts”.

[[ `grep Prompt=lts /etc/update-manager/release-upgrades` = 'Prompt=lts' ]] && echo '"/etc/update-manager/release-upgrades" is Ok' || echo 'Edit /etc/update-manager/release-upgrades and add line "Prompt=lts"'

If the line is not present edit “/etc/update-manager/release-upgrades” with:

sudo vi /etc/update-manager/release-upgrades

…and add the line:

Prompt=lts

Run the Upgrade Manager

sudo do-release-upgrade

Follow the on-screen instructions.

When this was tested on a production server:

  • The upgrade went smoothly.
  • When prompted for a new MySQL root password (several times during the upgrade), Enter was pressed without entering a new password. The existing MySQL root password was retained with no issues.
  • When the upgrade manager encountered a configuration file with custom changes, the existing, modified configuration file was retained (not replaced with the distribution default configuration file). This worked well. The only change needed after upgrading was adding a new line to phpMyAdmin’s configuration file that was needed for the new phpMyAdmin version.

Google PageSpeed Apache Module on Ubuntu Server

This is an excerpt from Adminbuntu, a site for Ubuntu Server administrators:

http://www.adminbuntu.com/google_pagespeed_module_on_apache

PageSpeed speeds up your website and reduces page load time. The PageSpeed Apache module automatically applies web performance best practices to pages and assets like CSS, JavaScript and images, without requiring site modification.

PageSpeed Project Page: https://developers.google.com/speed/pagespeed/module

Installation

Determine Whether 32-bit or 64-bit Ubuntu Server is Installed

if [[ `uname -a` == *_64* ]] ; then echo '64-bit' ; else echo '32-bit' ; fi
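An equivalent check uses uname -m, which prints just the machine hardware name (x86_64 on 64-bit Intel/AMD systems) instead of pattern-matching the full uname output:

```shell
# uname -m prints only the architecture, e.g. "x86_64" or "i686".
if [ "$(uname -m)" = "x86_64" ] ; then echo '64-bit' ; else echo '32-bit' ; fi
```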

Download the PageSpeed Apache Module

  • Using your web browser, navigate to https://developers.google.com/speed/pagespeed/module/download.
  • Look on the right side of the page, under Latest Stable Version, and right-click on either the 32-bit or 64-bit .deb package (based on whether you are running 32-bit or 64-bit Ubuntu Server). Select Copy Link Address to copy the download link to your clipboard.
  • In your server's terminal, use wget to download the module.

In this example, the link for the 32-bit .deb file is https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_i386.deb:

wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_i386.deb

Install the PageSpeed Apache Module

sudo dpkg -i mod-pagespeed-*.deb
sudo apt-get -f install
sudo service apache2 restart

Test Installation

You can verify that the PageSpeed module is installed and enabled with:

if [[ -a '/etc/apache2/mods-enabled/pagespeed.conf' ]] ; then echo 'pagespeed is enabled' ; else echo 'pagespeed is not enabled' ; fi

Flushing PageSpeed Server-Side Cache

When developing web pages with PageSpeed enabled, it is sometimes necessary to flush the server’s PageSpeed cache to get the system to reload CSS or JavaScript files that have been updated before the cache lifetime expires.

To do this, touch the file cache.flush:

sudo touch /var/cache/mod_pagespeed/cache.flush


Delete Old Files on Amazon AWS S3

If you have a script to automatically back up to Amazon S3 from your server, it is good to limit the age of stored backups.

This bash script allows you to use s3cmd to do just that. You specify the bucket to process and the age of files to retain.

Important: You must first install and configure s3cmd.

#!/bin/bash
 
usage (){
  echo " "
  echo Usage: s3-del-old "bucketname" "time"
  echo Example: s3-del-old \"mybucket\" \"30 days\"
  echo " "
  echo "Do not include a leading slash in bucketname."
  echo " "
}
 
# if incorrect # parameters, show usage
if [ $# -lt 2 ]; then
  usage
  exit 2
elif [ $# -gt 2 ]; then
  usage
  exit 2
fi
 
# don't allow leading slash in bucketname
firstchar=${1:0:1}
if [ "$firstchar" = "/" ]; then
  echo "ERROR: Do not start bucketname with a slash."
  usage
  exit 2
fi
 
# don't allow "s3:" at the beginning of bucketname
teststring=${1:0:3}
teststring=${teststring,,}
if [ "$teststring" = "s3:" ]; then
  echo "ERROR: Do not start bucketname with \"s3:\""
  usage
  exit 2
fi
 
# transform first parameter into fully formed s3 bucket parameter with trailing slash star
target='s3://'${1%/}'/*'
 
s3cmd ls "$target" | while read -r line;
do
  create_date=`echo $line | awk '{print $1,$2}'`
  create_date_unixtime=`date -d"$create_date" +%s`
  older_than_unixtime=`date -d"-$2" +%s`
  if [[ $create_date_unixtime -lt $older_than_unixtime ]]
  then
    filename=`echo $line|awk '{print $4}'`
    if [[ $filename != "" ]]
    then
      echo deleting $filename $create_date
      s3cmd del $filename
    fi
  fi
done;
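Saved as s3-del-old and made executable, the script is invoked as, for example, ./s3-del-old mybucket "30 days". The age test at its core can be exercised on its own with GNU date (a sketch; the sample timestamp is arbitrary, and the -d option is GNU-specific):

```shell
# Compare a file's (hypothetical) creation date against a 30-day cutoff,
# the same way the loop in the script does.
create_date='2013-01-15 10:00'
create_date_unixtime=$(date -d "$create_date" +%s)
older_than_unixtime=$(date -d '-30 days' +%s)
if [ "$create_date_unixtime" -lt "$older_than_unixtime" ] ; then
  echo 'older than 30 days: would delete'
else
  echo 'within 30 days: would keep'
fi
# Prints: older than 30 days: would delete
```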

Use Chrome securely from Starbucks via SSH SOCKS

Do you have a server that you can access with OpenSSH? Do you want to be able to browse the web, even non-SSL, unencrypted pages, without others on the network being able to see what you’re looking at or even hijacking your sessions? Given the existence of Firesheep, it is really easy for even unsophisticated users to hijack a web browsing session.

The method I’m presenting is easy and effective. OpenSSH makes this a snap. Your web browsing packets will be routed via an encrypted connection to your server.

Create a SOCKS Proxy Connection on localhost

First, in a terminal, open a SOCKS connection to your server with OpenSSH. Just add "-D 9999" to your normal SSH command. This will create a SOCKS proxy on localhost at port 9999.

ssh -D 9999 username@myserver.com

Depending on your configuration, you may need to enter your server account password, or whatever your normal SSH authentication is. This also opens a normal SSH session: you will get a shell prompt on the server as usual. If you do not want a shell prompt, use "-ND" instead of "-D"; the -N option tells SSH not to execute a remote command, so no shell is opened.

You now have a proxy on your local computer using SOCKS on port 9999. Now we just need to use it.

Install Switchy! in your Chrome browser

This can be easily found in the Chrome Web Store.

Use your Shiny New SOCKS Proxy

Open the Switchy! Options dialog. Type a name for this proxy in Profile Name. On the SOCKS Host line, enter "localhost" in the first blank and "9999" for the Port. Click the Save button. You now have a profile pointing at the SOCKS proxy running on your localhost.

Then, select the proxy by clicking the Switchy! icon in Chrome and selecting the proxy name you just entered.

You are now using a secure connection to browse the web. Note that someone on the network where your server is hosted can still snoop your traffic, but not in the Starbucks where you are sitting.

View/download a file not in the public directory with PHP

When you need to give users access to files that are not in the public directory, where you cannot simply use an anchor tag with an “href”, you need to do a bit of work. For example, if you’ve created an authentication system where only authenticated users can download or view a file, this can be necessary.

$filename = 'something';
$file_path = '/somepath/to/the/file/';
$file_fullpath = $file_path . $filename;
//
//
if (!file_exists($file_fullpath)) {
    header("HTTP/1.0 404 Not Found");
    return;
}
//
// get the mime type
$finfo = finfo_open(FILEINFO_MIME_TYPE); 
$mimeType = finfo_file($finfo, $file_fullpath);
//
// calc values for header
$size = filesize($file_fullpath);
$time = date('r', filemtime($file_fullpath));
$fm = @fopen($file_fullpath, 'rb');
if (!$fm) {
    header('HTTP/1.0 500 Internal Server Error');
    return;
}
$begin = 0;
$end = $size; // exclusive end offset
if (isset($_SERVER['HTTP_RANGE'])) {
    if (preg_match('/bytes=\h*(\d+)-(\d*)/i', $_SERVER['HTTP_RANGE'], $matches)) {
        $begin = intval($matches[1]);
        // the Range header's end offset is inclusive, so add 1
        if (!empty($matches[2])) $end = intval($matches[2]) + 1;
    }
}
//
// create http header
if ($begin > 0 || $end < $size) {
    header('HTTP/1.0 206 Partial Content');
} else {
    header('HTTP/1.0 200 OK');
}
header("Content-Type: $mimeType");
header('Cache-Control: public, must-revalidate, max-age=0');
header('Pragma: no-cache');
header('Accept-Ranges: bytes');
header('Content-Length: ' . ($end - $begin));
header('Content-Range: bytes ' . $begin . '-' . ($end - 1) . "/$size");
header("Content-Disposition: inline; filename=$filename");
header('Content-Transfer-Encoding: binary');
header("Last-Modified: $time");
header('Connection: close');
// output the file in 16 KB chunks
$cur = $begin;
fseek($fm, $begin, SEEK_SET);
while (!feof($fm) && $cur < $end && (connection_status() == 0)) {
    print fread($fm, min(1024 * 16, $end - $cur));
    $cur += 1024 * 16;
}
fclose($fm);

There you have it. Your user can click the link for the file, which then runs this code. In my case, this is in a Kohana controller.

The files are not stored in a public directory, so unauthenticated users cannot access them.

If a file is created on the fly, like a PDF of an invoice, this will also work.
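The Range header the PHP code parses has the form bytes=begin-end, with both offsets inclusive. A shell sketch of the same parsing shows the values a given header yields (the header string here is just an example):

```shell
# Extract begin/end from a Range header and compute the byte count,
# mirroring the preg_match above. The end offset is inclusive, so the
# number of bytes is end - begin + 1.
range='bytes=100-199'
begin=$(printf '%s' "$range" | sed -n 's/bytes=\([0-9]*\)-.*/\1/p')
end=$(printf '%s' "$range" | sed -n 's/bytes=[0-9]*-\([0-9]*\).*/\1/p')
echo "begin=$begin end=$end length=$((end - begin + 1))"
# Prints: begin=100 end=199 length=100
```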