
apache http server - WordPress permalinks not working in Nginx


I'm moving my WordPress blog from Apache to Nginx. I've tried multiple tutorials to get permalinks working, but none of them have worked for me. My website structure is like this:


main site -> www.localhost.com

WordPress blog -> www.localhost.com/blog

The website is in /var/www/html and WordPress is installed in /var/www/html/blog.


I've read multiple articles and watched multiple videos, but nothing seems to work. Please let me know where I'm going wrong.
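For context, on Apache the permalinks were handled by WordPress's standard .htaccess rewrite rules, which for a subdirectory install like mine look roughly like this:

# Typical WordPress .htaccess for a /blog subdirectory (approximate; this is
# what Apache's mod_rewrite was doing and what Nginx now has to replicate)
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>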


I've defined two server blocks in /etc/nginx/sites-available/default: one for the main site and one for the blog.


# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name _;

    charset utf-8;
    error_page 404 /404.php;

    location /article {
        rewrite ^/article.* / redirect;
    }

    #location / {
    #    try_files $uri $uri/ /loadpage.php?$args;
    #}

    location ~ \.html$ {
        try_files $uri /courses/index.php?$args;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #
        # # With php7.0-cgi alone:
        # fastcgi_pass 127.0.0.1:9000;
        # With php7.0-fpm:
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}



server {
    listen 80;
    listen [::]:80;
    root /var/www/html/blog;
    index index.php index.html index.htm;
    server_name example.com www.example.com;

    location /blog/ {
        try_files $uri $uri/ /blog/index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
        # fastcgi_pass unix:/var/run/php/php7.1-fpm.sock; # Ubuntu 17.10
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;   # Ubuntu 17.04
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

Answer



By creating multiple server blocks for the same domain you are effectively creating a collision between the virtual servers. server blocks define what Nginx listens for: when several virtual servers listen on the same port, a request is passed to the one whose server_name best matches the request's Host header. With multiple effectively identical virtual servers, Nginx has no way of telling which one should receive the request.
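As a minimal illustration (with made-up host names), two virtual servers on the same port can only be told apart by their server_name values:

# Hypothetical example: both servers listen on port 80, so Nginx chooses
# the one whose server_name matches the request's Host header.
server {
    listen 80;
    server_name shop.example.com;   # requests for shop.example.com land here
    root /var/www/shop;
}

server {
    listen 80;
    server_name blog.example.com;   # requests for blog.example.com land here
    root /var/www/blog;
}

# If both blocks declared the same (or an unmatched) server_name, Nginx could
# no longer distinguish them and would simply fall back to the default server.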


Your main site and WordPress are under the same domain (and the same port) and thus should go in a single virtual server. You separate them using location blocks.


Take caution with Nginx's location selection algorithm: a matching regular expression location takes precedence over ordinary prefix locations. In your example, everything ending in .html is sent to your main site, which may interfere with WordPress when it uses URIs ending in .html as well. To prevent this, you can use nested location blocks. By moving the regular expression location inside location /, it is only evaluated when that parent location is selected. When a request comes in for /blog, the /blog location is the longest matching prefix and is therefore chosen over /; because the regular expression block is nested inside /, it is never evaluated for requests to WordPress.


# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name _;

    charset utf-8;
    error_page 404 /404.php;

    location /article {
        rewrite ^/article.* / redirect;
    }

    location / {
        location ~ \.html$ {
            try_files $uri /courses/index.php?$args;
        }

        try_files $uri $uri/ /loadpage.php?$args;
    }

    location /blog/ {
        try_files $uri $uri/ /blog/index.php?$args;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #
        # # With php7.0-cgi alone:
        # fastcgi_pass 127.0.0.1:9000;
        # With php7.0-fpm:
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}
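After merging everything into this single server block (and removing the second one), the configuration can be verified and applied; a minimal sketch of the usual commands, assuming a systemd-based setup:

# Check the configuration syntax before applying it
sudo nginx -t

# Reload Nginx so the updated server block takes effect
sudo systemctl reload nginx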

This happens to me on a regular basis if I leave the computer for upwards of 10 minutes. It didnt do so at first but started after a couple of days. This is possibly related to further windows updates although nothing seems to tie in obviously when looking at my update history. I have to hold the power button in to power off. If the screens have switched off aswell they wont come back on, if they haven't I see the login picture and can move the mouse pointer but nothing happens and no combination of keyboard mashes or mouse clicks lets me see the login prompt. In the event log (type event viewer into the start menu) under system before every Critical problem (me powering down the machine without restarting) I get distributedCOM errors talking about this guid: "The server {BF6C1E47-86EC-4194-9CE5-13C15DCB2001} did not register with DCOM within the required timeout." I also get the same error for this 1B1F472E-3221-4826-97DB-2C2324D389AE. This seems to be a common theme and...