
Why blindly piping a script into Bash is a bad idea

I’m a regular listener of Steve Gibson’s Security Now! podcast. The last several episodes (557, 558, and 559) have discussed the security implications of piping a script into Bash (or some other script interpreter, such as Ruby or Python). For example, the install instructions for Homebrew, a fantastic package manager for Macs that I use on my work laptop, offer a really slick one-liner for installation:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

This idiom is so common that there’s an entire Tumblr site showing various examples.

So, why is this a bad idea?

Well, let’s take a step back. In the above Homebrew example, you can view the content of the script easily just by visiting https://raw.githubusercontent.com/Homebrew/install/master/install. You can read through the script, clear as day, in your browser to verify that there’s nothing fishy going on. Looks great, right?

Not so fast! Homebrew is hosted by GitHub, which we can trust. However, a number of other random scripts aren’t hosted by sites that we might inherently trust (just look at the Tumblr site for examples). If a script is hosted by a malicious site, it turns out the site can easily trick the user into thinking the script is OK when it really isn’t. I’ve cooked up a simple example to show this.

Suppose I wrote a script that you might want to install:

bash <(curl -s https://research.gfairchild.com/bash_pipe/script.sh)

If you open up the script in your browser (https://research.gfairchild.com/bash_pipe/script.sh), you can see that it's clearly a friendly script, right? Wrong! If you actually run the script in bash, you get this:

$ bash <(curl -s https://research.gfairchild.com/bash_pipe/script.sh)
I am a bad script! :(

What happened?

I created a single simple PHP file, script.php. I also set up an .htaccess file to rewrite the URL so that it really does look like you're downloading a bash script. The magic happens in script.php:

<?php
    // serve the malicious payload to command-line clients (curl and wget)
    // and the innocuous script to everything else (e.g., browsers)
    if(substr($_SERVER['HTTP_USER_AGENT'], 0, 4) === 'curl' or
       substr($_SERVER['HTTP_USER_AGENT'], 0, 4) === 'Wget')
        echo 'echo "I am a bad script! :("';
    else
        echo 'echo "I am a good script! :)"';
?>

Here's the .htaccess file:

RewriteEngine On
RewriteRule ^script\.sh$ script.php [NC]

Here, all I do is detect the HTTP request's user agent. In short, the user agent is a header attached to most HTTP requests that tells the server what kind of client is making the request. For example, I'm running Firefox 46, and my user agent string is this:

Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:46.0) Gecko/20100101 Firefox/46.0

curl and wget have their own user agents. My system curl and wget user agents are curl/7.35.0 and Wget/1.15 (linux-gnu), respectively. PHP makes it really easy to check the user agent, so all I have to do is check to see if the user is using curl or wget. If they are, I send them the malicious script. If they aren't using curl or wget, I show them the good script. That way, if a user checks the script in their browser but doesn't check it once it's downloaded using curl, my malicious attempt is successful!
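One way to catch this particular trick is to fetch the script twice with curl, once normally and once with a spoofed browser User-Agent, and diff the results. Here's a quick sketch (the User-Agent string is just an example, and this won't catch a server that keys on something other than the user agent, like your IP address):

# fetch the script the way a normal curl invocation would
curl -s https://research.gfairchild.com/bash_pipe/script.sh -o as_curl.sh

# fetch it again while pretending to be a browser
curl -s -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:46.0) Gecko/20100101 Firefox/46.0" https://research.gfairchild.com/bash_pipe/script.sh -o as_browser.sh

# any output here means the server is serving curl something different
diff as_curl.sh as_browser.sh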

So what do I do?

These sorts of one-liner install scripts aren't inherently bad. Installing Homebrew without its install script would be a nightmare! So what do you do?

Simple: download the file using curl or wget, inspect it on the machine it's going to be installed on, and then run it. Don't take your browser's word for it. Keep in mind that the script might actually need to download/install more things (perhaps the script itself curls a file), so if you want to thoroughly vet a script, it may be a small rabbit hole. But it's doable. In the case of Homebrew, you'd do this:

$ wget https://raw.githubusercontent.com/Homebrew/install/master/install
$ vim install  # review the install script for any malicious content
$ /usr/bin/ruby install

Ultimately, as Steve Gibson says, we have to trust someone or else we'll never use a computer. However, it's important to recognize when attacks are relatively easy and prioritize spending time analyzing those situations. Obviously, I'm not going to break open my CPU and use a scanning electron microscope to ensure it's doing what it says it is. But there are small, relatively quick things we can do to help maintain a secure environment, and not blindly piping scripts into Bash is one of those things.

Installing the requirements for Pillow 3 on Debian

I’m installing Mezzanine, a CMS written in Django, for a project I’m working on. Mezzanine requires Pillow, an imaging library for Python, and Pillow in turn requires/recommends a number of system libraries. It took me a little while to figure out how to get (mostly) everything working on Debian 8.2 (Jessie). Here’s a command to install all of Pillow’s requirements in one fell swoop:

sudo aptitude install libjpeg62-turbo-dev libopenjpeg-dev libfreetype6-dev libtiff5-dev liblcms2-dev libwebp-dev tk8.6-dev

When I run pip install -v pillow, I see this in the output:

PIL SETUP SUMMARY
--------------------------------------------------------------------
version  Pillow 3.0.0
platform linux 3.4.2 (default, Oct  8 2014, 10:45:20)
 [GCC 4.9.1]
--------------------------------------------------------------------
*** TKINTER support not available
--- JPEG support available
*** OPENJPEG (JPEG2000) support not available
--- ZLIB (PNG/ZIP) support available
--- LIBTIFF support available
--- FREETYPE2 support available
--- LITTLECMS2 support available
--- WEBP support available
--- WEBPMUX support available
--------------------------------------------------------------------
To add a missing option, make sure you have the required
library, and set the corresponding ROOT variable in the
setup.py script.

Unfortunately, it doesn’t seem like Pillow recognizes Tcl/Tk (despite the fact that I installed tk8.6-dev, which includes tcl8.6-dev), so I can’t get Tkinter support working. I also installed OpenJPEG via libopenjpeg-dev, but the version in Debian seems to be too old:

$ aptitude show libopenjpeg-dev
Package: libopenjpeg-dev                 
State: installed
Automatically installed: no
Multi-Arch: same
Version: 1:1.5.2-3
Priority: extra
Section: libdevel
Maintainer: Debian PhotoTools Maintainers <pkg-phototools-devel@lists.alioth.debian.org>
Architecture: amd64
Uncompressed Size: 111 k
Depends: libopenjpeg5 (= 1:1.5.2-3)
Description: development files for OpenJPEG, a JPEG 2000 image library - dev
 OpenJPEG is a library for handling the JPEG 2000 image compression format. JPEG 2000 is a wavelet-based image compression standard and permits progressive transmission by pixel and resolution accuracy for progressive downloads of an encoded image. It supports lossless and lossy compression, supports higher compression than JPEG 1991, and has resilience to
 errors in the image. 
 
 This is the development package
Homepage: http://www.openjpeg.org

Tags: devel::library, role::devel-lib

The Pillow docs state that versions 2.0.0 and 2.1.0 are supported, so 1.5.2 must be too old.

If someone knows how to get Tkinter or OpenJPEG working, please let me know in the comments! I don’t think it’ll matter much in the end, but it’d be nice to have all of Pillow’s functionality available.

UPDATE: I was able to get Tkinter working with the help of the Pillow devs. Issue #1473 has the full discussion, but the main takeaway is that I had to install python3-tk, which enables Tkinter support.

Additionally, the Pillow docs actually contain a Building on Linux section that I missed before. It more or less echoes what I lay out in this blog post. This is the final command I had to use:

sudo aptitude install libjpeg62-turbo-dev libopenjpeg-dev libfreetype6-dev libtiff5-dev liblcms2-dev libwebp-dev tk8.6-dev python3-tk

Unfortunately, OpenJPEG still isn’t supported, but that’s just because Pillow requires a newer version than is contained in the Debian repos; build it from source if you need it, and you should be good to go.
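For the record, building OpenJPEG 2.x yourself looks roughly like this. This is an untested sketch (the GitHub repository location is an assumption; check the OpenJPEG site for the current source location):

sudo aptitude install cmake
git clone https://github.com/uclouvain/openjpeg.git
cd openjpeg
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
sudo make install
sudo ldconfig
# reinstall Pillow so its setup.py picks up the newly installed library
pip uninstall -y pillow
pip install -v pillow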

How to change a remote repository URL in Git

I just ran into a situation where I needed to change a remote URL for a personal repository in Git. The project lived on a server at work, but I’m going to be going out of town for several weeks starting tomorrow. I need this project, and unfortunately, I can’t access it from home due to the work firewall.  What I decided to do is just move the repo to my personal server for now. Here’s how I did it (if it’s not obvious, I work over SSH).

First, I just wanted to see the current configuration:

~/Documents/project> git remote show origin
* remote origin
  Fetch URL: olduser@oldserver.com:/path/to/project.git
  Push  URL: olduser@oldserver.com:/path/to/project.git
  HEAD branch: master
  Remote branch:
    master tracked
  Local branch configured for 'git pull':
    master merges with remote master
  Local ref configured for 'git push':
    master pushes to master (up to date)

Next, I need to SSH into the new server and create a new bare repo into which I’ll push my project. Since I store my git projects in /srv/git, I need to make sure I give the appropriate ownership to the project.

~$ cd /srv/git/
/srv/git$ sudo mkdir project.git
/srv/git$ sudo chown newuser:newuser project.git/
/srv/git$ cd project.git/
/srv/git/project.git$ git init --bare
Initialized empty Git repository in /srv/git/project.git/

The new server is now ready. All that’s left is for me to change the remote repo URL of the project on my local machine and then just push the project to the new server.

~/Documents/project> git remote set-url origin newuser@newserver.com:/srv/git/project.git
~/Documents/project> git push
Counting objects: 37567, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (37556/37556), done.
Writing objects: 100% (37567/37567), 88.91 MiB | 3.76 MiB/s, done.
Total 37567 (delta 4931), reused 0 (delta 0)
To newuser@newserver.com:/srv/git/project.git
 * [new branch]      master -> master
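If you want to double-check that the change took, git remote -v prints the fetch and push URLs currently in use; after the update above, it should look something like this:

~/Documents/project> git remote -v
origin  newuser@newserver.com:/srv/git/project.git (fetch)
origin  newuser@newserver.com:/srv/git/project.git (push)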

That’s it! All pushes/pulls from now on will happen with the new server. Pretty easy!

Simple Unix find/replace using Python

Find/replace in Unix isn’t very friendly. Sure, you can use sed, but it uses fairly nasty syntax that I always forget:

sed -i.bak 's/STRING_TO_FIND/STRING_TO_REPLACE/g' filename

I wanted something really simple that’s more user-friendly. I turn to Python:

#!/usr/bin/env python
 
"""
    Replace all instances of a string in the specified file.
"""
 
import argparse
import fileinput
 
#deal with command line arguments
argparser = argparse.ArgumentParser(description='Find/replace strings in a file.')
argparser.add_argument('file', type=str, help='file on which to perform the find/replace')
argparser.add_argument('find_string', type=str, help='string to find')
argparser.add_argument('replace_string', type=str, help='string that replaces find_string')
args = argparser.parse_args()
 
for line in fileinput.input(args.file, inplace=1):
    print line.replace(args.find_string, args.replace_string), #trailing comma prevents newline

That’s it. Toss this into a file called find_replace.py and optionally put it on your PATH. Here’s an example where I replace all instances of <br> with <br/> in an HTML file:

find_replace.py index.html "<br>" "<br/>"

Here’s an example where I use GNU Parallel to do the same find/replace on all HTML files in a directory:

find . -name '*.html' | parallel "find_replace.py {} '<br>' '<br/>'"

Much more user-friendly than sed!

This certainly works, and the code is incredibly simple, but fileinput is really geared towards reading lots of files. Perhaps more important is that there's no error handling here. I could (and probably should) surround the fileinput loop with try-except, but I much prefer using with for file I/O. Unfortunately, with support for fileinput wasn't added until Python 3.2 (I'm using 2.7). And personally, I think that while the inplace parameter is pretty cool, it's dangerous because it's not particularly intuitive. A better, although slightly longer, solution is to read the file, write the changes out to a temp file, and then copy the temp file's contents back over the original. Here's a more "proper" solution:

#!/usr/bin/env python
 
"""
    Replace all instances of a string in the specified file.
"""
 
import argparse
import tempfile
from os import fsync
 
#deal with command line arguments
argparser = argparse.ArgumentParser(description='Find/replace strings in a file.')
argparser.add_argument('file', type=str, help='file on which to perform the find/replace')
argparser.add_argument('find_string', type=str, help='string to find')
argparser.add_argument('replace_string', type=str, help='string that replaces find_string')
args = argparser.parse_args()
 
#open 2 files - args.file for reading, and a temporary file for writing
with open(args.file, 'r+') as input, tempfile.TemporaryFile(mode='w+') as output:
    #write replaced content to temp file
    for line in input:
        output.write(line.replace(args.find_string, args.replace_string))
    #write all cached content to disk - flush followed by fsync
    output.flush()
    fsync(output.fileno())
    #go back to beginning to copy data over
    input.seek(0)
    output.seek(0)
    #copy output lines to input
    for line in output:
        input.write(line)
    #remove any excess stuff from input
    input.truncate()

This code uses with, so the files are closed properly even if something goes wrong, and it’s written specifically to handle a single file (unlike fileinput), so it should be more efficient.

Compared to sed, this doesn’t currently allow for regular expressions, but that would be fairly trivial to add in; perhaps an extra command-line argument indicating that find_string is a regular expression should be added.

Securing against BEAST/CRIME/BREACH attacks

July 11, 2016 update: Simplify your life and just use Let’s Encrypt. It’s brain dead simple to use and automatically configures everything for you. The default security settings are essentially identical to Mozilla’s intermediate compatibility TLS settings (see options-ssl-apache.conf).

October 18, 2014 update: This information is outdated. Mozilla’s Security/Server Side TLS guide is much more comprehensive and should be used instead. It addresses BEAST, CRIME, BREACH, and POODLE and is consistently updated as new vulnerabilities are discovered.

I maintain a domain that requires SSL. It’s been using the standard 1024-bit keys that OpenSSL generates with standard Apache VirtualHost entries. After the various TLS exploits that have been revealed over the last few years, I spent some time looking into locking down my site.

First, I generate strong RSA keys. Very strong. 2048-bit keys are the current standard, but I opted for 4096-bit keys. No attack has been shown on 2048-bit keys, and 4096-bit keys have slightly more overhead, but I don’t mind; luckily, Linode (my host) just recently upgraded all CPUs. Security is all I care about, and a little CPU overhead is worth it.

To start, I create the 4096-bit key and self-signed certificate that Apache will use:

cd /etc/apache2
sudo mkdir ssl
cd ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout [private_key_name].key -out [certificate_name].pem
sudo chmod 600 *

Then, I instruct Apache to use them. My VirtualHost file looks like this:

<VirtualHost [IPv4_address] [IPv6_address]:80>
	ServerName [domain].com
	Redirect permanent / https://[domain].com/
</VirtualHost>
 
<VirtualHost [IPv4_address] [IPv6_address]:443>
	ServerAdmin [admin_email]
	ServerName [domain].com
	DocumentRoot /srv/www/[domain].com/public_html/
	ErrorLog /srv/www/[domain].com/logs/error.log
	CustomLog /srv/www/[domain].com/logs/access.log combined
 
	SSLEngine On
	SSLCertificateFile /etc/apache2/ssl/[certificate_name].pem
	SSLCertificateKeyFile /etc/apache2/ssl/[private_key_name].key
	SSLHonorCipherOrder On
	SSLCipherSuite ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH
	SSLProtocol -ALL +TLSv1
	SSLCompression Off
</VirtualHost>

That’s it. Restart Apache, and that’s all it takes. It’s the last few lines that really lock it down:

SSLHonorCipherOrder On
SSLCipherSuite ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH
SSLProtocol -ALL +TLSv1
SSLCompression Off

These lines specify the cipher suite (suite of encryption and authentication algorithms the browser and server are allowed to use) as well as the SSL protocols used.

The SSLHonorCipherOrder and SSLCipherSuite recommendation comes from http://blog.ivanristic.com/2011/10/mitigating-the-beast-attack-on-tls.html (Ivan Ristić was the original developer of mod_security and is very active in the SSL world). These lines specify which cipher suites the server will accept and in what order it prefers them; SSLHonorCipherOrder ensures the server's ordering wins out over the browser's. As browser security improves (most browsers are still lagging behind in TLS 1.2 support, for example), this list/ordering will likely change to support stronger cipher suites.

The SSLProtocol line is a common one for only allowing TLS v1 or higher. SSL v2 is flawed in several serious ways and should be disallowed. SSL v3 is considered less secure than TLS v1+. All modern browsers support TLS v1, so I’m not alienating any users here.

The SSLCompression line is important for preventing the BREACH and CRIME attacks, which take advantage of SSL compression. This line only affects Apache 2.2+.

Finally, when all is said and done, you can visit Qualys SSL Labs to test the security of your site. If you’re using a self-signed certificate like mine, you’ll always get a failing grade because the certificate isn’t trusted. That isn’t a big deal for my purposes; what’s important are the protocol support, key exchange, and cipher support ratings. Using the configuration above, I currently get at least 90 on all three of these ratings.
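You can also poke at the server directly with openssl s_client to confirm that the old protocols are actually refused and that compression is off. A couple of quick checks (whether the protocol-selection flags are available depends on how your local OpenSSL was built):

# this handshake should fail, since SSLv3 is disabled on the server
openssl s_client -connect [domain].com:443 -ssl3

# this should succeed; look for "Compression: NONE" in the output
openssl s_client -connect [domain].com:443 -tls1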

Ivan’s recent post, Configuring Apache, Nginx, and OpenSSL for Forward Secrecy, is also worth noting here. Of special interest is the section on RC4 vs. BEAST:

Today, only TLS 1.2 with GCM suites offer fully robust security. All other suites suffer from one problem or another (e.g, RC4, Lucky 13, BEAST), but most are difficult to exploit in practice. Because GCM suites are not yet widely supported, most communication today is carried out using one of the slightly flawed cipher suites. It is not possible to do better if you’re running a public web site.

The one choice you can make today is whether to prioritize RC4 in most cases. If you do, you will be safe against the BEAST attack, but vulnerable to the RC4 attacks. On the other hand, if you remove RC4, you will be vulnerable against BEAST, but the risk is quite small. Given that both issues are relatively small, the choice isn’t clear.

However, the trend is clear. Over time, RC4 attacks are going to get better, and the number of users vulnerable to the BEAST attack is going to get smaller.

I don’t use Ivan’s new suggestions because they require Apache 2.4+. I’m using Ubuntu 12.04 LTS, which ships with Apache 2.2. When 14.04 LTS comes out, I’ll likely transition to his crypto scheme.

logrotate: Use it!

I was digging through my virtual hosts looking at log files and noticed that a few of them had pretty massive access logs. One of the more popular sites I run for some friends, American K-Pop Fans, had an access log of 363 MB, and I’ve only been running the site for a few weeks! That obviously wasn’t going to work, so I started looking up how to manage log files. I noticed that Apache seemed to do an awesome job of keeping its own logs organized in /var/log/apache2/ and figured I should be able to model my log handling after that.

After some Googling, I stumbled onto rotatelogs. After fumbling with it, I discovered that, although it’s pretty cool, it wasn’t quite what I wanted. I did some more Googling and discovered logrotate, a utility built into most Linux distributions for managing large numbers of log files. The two almost-identical names confused me at first, but the difference became clear pretty fast.

rotatelogs is a really simple program for automatically breaking log files apart as Apache writes entries. Apache pipes each log entry through the program, which then decides whether it needs to create a new log file or can keep using the existing one. You can choose to create new log files based on time or on log size. logrotate, on the other hand, is a command-line utility that runs as a daily cron job and processes all of the scripts in /etc/logrotate.d/. Looking in that directory, there’s an apache2 script which keeps Apache’s log files nice and tidy, along with a variety of others, depending on what’s installed.

Both Linode’s logrotate article and Slicehost’s logrotate article helped me set up logrotate for my virtual hosts. Here’s what mine looks like:

/srv/www/*/logs/*.log {
	rotate 52
	weekly
	compress
	delaycompress
	sharedscripts
	postrotate
		apache2ctl graceful
	endscript
}

The idea is pretty simple. Line by line:

  1. I list all of my virtual host log paths. All of my virtual hosts follow the same directory structure, so I can get away with wildcard usage like this.
  2. I tell it that I want to keep 52 weeks of previous log files.
  3. I tell it that I want it to run weekly.
  4. I tell it that I want old log files compressed to save space.
  5. I tell it that it should delay compressing the most recent archived log file.
  6. I tell it that the postrotate script below should only run once, after all of the logs matched on the first line have been rotated (rather than once per log file).
  7. I tell it to restart Apache gracefully (no open connections will be killed, and old log files won’t be closed immediately). We use delaycompress because we don’t want to compress the most recent log file until we’re sure Apache is done with it.
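Before relying on it, you can dry-run the configuration with logrotate’s debug flag, which prints what would happen without actually touching anything (assuming you saved the snippet above as /etc/logrotate.d/virtualhosts; substitute whatever filename you actually used):

# -d is debug mode and implies a dry run; nothing is rotated or compressed
sudo logrotate -d /etc/logrotate.d/virtualhosts

# once the output looks right, you can force an immediate rotation
sudo logrotate -f /etc/logrotate.d/virtualhosts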

That’s it! This simple script maintains all my log files for me so that I don’t have to worry about them growing out of control.

aptitude install php5-sqlite

This is kind of a mental note, but in order to get SQLite to work with PHP5 under Ubuntu 10.04 LTS, it’s necessary to install the required libraries:

aptitude install php5-sqlite

I’ve been trying to figure out an error regarding my sentinel surveillance site calculator. It uses an SQLite database on the back-end (the same one I provide in this post), and the page was only half loading. As soon as it got to the SQLite calls, it’d just die. The code doesn’t run on my server, and I couldn’t view the logs, so it was kind of tricky to diagnose it. After moving the code over to my server, I very quickly discovered that it was just a lack of the proper PHP SQLite libraries causing the issue. Part of the problem is that the PHP documentation on SQLite3 is extremely vague and makes it sound like PHP ships with SQLite support, so I never thought that might be the issue. Had I just done a simple phpinfo() lookup, this would’ve been painfully obvious. Oops!
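A quick way to check for this sort of thing from the command line is to ask PHP which modules it has loaded (the exact module name can vary a bit between distributions):

php -m | grep -i sqlite

# or check for the SQLite3 class directly
php -r 'var_dump(class_exists("SQLite3"));'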

Also, I wholeheartedly endorse the Xerial SQLiteJDBC library written by Taro L. Saito for using SQLite databases in Java. It’s very fast, and I haven’t had any problems with bugs. And best of all, unlike Zentus’ SQLiteJDBC library, it’s regularly maintained and updated.

WordPress auto update/plugin install FTP error?

New Hosting!

I’ve had web hosting basically since I got into undergrad. I started off using a few free hosts but outgrew those pretty quick. Then I got an account at Dreamhost. They were cheap and did the trick for 3 or 4 years. But, my needs grew. I needed PostgreSQL for some work I was doing for a research assistantship here at UI. Looking around at shared hosts, Media Temple seemed like the next logical choice. Their control panel was pretty nice and flexible, but their hosting wasn’t very fast.  They have been promising a Cluster Server to succeed their Grid for years now, but it’s never happened.  In fact, any updates/progress at Media Temple happened extremely slowly.  It wasn’t until mid-2009 that they finally transitioned to MySQL 5!

Fast forward to this weekend. My MT hosting is set to expire in a few days, so I figured it was time to move on. After looking around, I decided to go with Linode VPS hosting. Their VPS service is exactly what a computer science grad student needs. Their control panel rocks, their documentation rocks, and their community rocks. Speed is excellent (much faster than Media Temple’s Grid). I’m fairly new to the whole *nix server administration thing, so I figured it was time to get my hands dirty. I have quite a bit of experience with Ubuntu’s desktop distros, so I chose to go with Ubuntu 10.04 (Lucid Lynx) LTS as my server OS. Having root access is a really great thing.

I spent this weekend setting up, configuring, and securing my server for LAMP stuff. Then I began the migration process. Luckily, I don’t have much I needed to migrate – just a “simple” WordPress 3.1 install. I put simple in quotes because it turned out to be a little bit of a pain. The initial install and migration were very straightforward, but I ran into a few issues when I decided to use the built-in plugin installer. What I saw was a screen asking for my FTP login credentials. After a little digging around, this turned out to be caused by some wonky permissions problems. I’m writing this blog entry just in case anyone finds it useful in the future, because a lot of the solutions I found when Googling were insecure or just plain sucked.

The Problem

The gist of the problem is this:

  • WordPress checks to see if it has permissions to write files by calling getmyuid() and comparing that UID to the UID of the running process.
  • getmyuid() returns the UID of the script’s owner.  This is almost certainly the Linux user that installed WordPress.
  • Apache instances run as user www-data (on other systems, this may be different).
  • Unless the script’s owner is www-data, these two UIDs will not be the same, so WordPress will decide it doesn’t have the correct permissions.

There are two primary ways to solve this problem.

Solution 1: suPHP

suPHP is a handy Apache module that lets PHP scripts execute as the user that owns the script.  It took quite a bit of tinkering to get it working (its documentation is awful), but it did indeed work.  This may seem like the perfect solution at first glance, but there are a few reasons why I decided against it.

  • suPHP replaces mod_php.  If you have multiple VirtualHost entries, suPHP will run on all of these instances.  I’m sure it’s probably possible to enable suPHP on certain sites and mod_php on others, but it’s not trivial.
  • suPHP is reportedly 20-25% slower than mod_php.
  • suPHP can break stuff.  While reading about how to set up and configure suPHP, I ran across a bunch of instances of it breaking phpMyAdmin sites.

suPHP is neat and definitely has applications in a shared hosting situation, but I decided it wasn’t for me.  I wanted to stick with mod_php.

Solution 2: Set Appropriate Permissions

This is the solution I ended up settling on after reading WordPress’ permissions guide and WordPress’ hardening guide. Specifically, this section:

All files should be owned by your user account, and should be writable by you. Any file that needs write access from WordPress should be group-owned by the user account used by the web server.

So what we need to do is group-own the files/directories that WordPress needs access to:

chgrp -R www-data ~/public_html/wordpress/

This gives your web server group ownership of all files/directories in the WordPress directory. I give WordPress group access to everything because when WordPress automatically updates, it could potentially change any of these files/directories. You still remain the owner of all files/directories. Note that by default the group will only have read access to files and read/execute access to directories; the group also needs write access (the same permissions as the owner). I change all files to 664 and all directories to 775:

find ~/public_html/wordpress/ -type d -exec chmod 775 {} \;
find ~/public_html/wordpress/ -type f -exec chmod 664 {} \;

Now, the web server has the correct permissions on all WordPress files/directories.

However, we aren’t quite done yet.  If you remember back to the initial problem, WordPress compares the UID of the script’s owner to the UID of the running process, and giving the group write access doesn’t change that comparison.  If you try to update/install something at this point, you’ll still get the FTP credentials error.  There’s one simple fix left to apply.  In wp-config.php, simply add this line:

define('FS_METHOD', 'direct');

This line, briefly described in WordPress’ wp-config.php guide, allows WordPress to bypass the getmyuid() check and simply attempt to write files directly.  This will fail (and will likely add a line to your error logs) if the permissions are set incorrectly, but if they’re set properly (which they will be if you followed the above instructions), it finally solves our problem.

That’s it! You can now install plugins and themes from the WordPress Dashboard, and you can now automatically update WordPress.