![Fingers clicking on a website template.](https://fatstacksblog.com/wp-content/uploads/2021/06/click-website-template-june142021.jpg)
For a couple of years now I’ve talked about how much I like using carefully chosen taxonomies to aid navigation on my sites.
I limit categories but am liberal with tags.
I sometimes create custom taxonomies for custom post types as well. Not often, but sometimes.
One thing I do differently than most is I index almost every archive page – categories, tags and custom taxonomy archives.
When my sites get to a certain size, I also create a “Topics” page which is a nicely designed sitemap. I don’t include links to every URL on the site, but quite a few.
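If you’re curious what generating a Topics page like that could look like mechanically, here’s a rough sketch that pulls categories and tags from the core WordPress REST API and prints a linked list. The endpoint paths are standard WordPress; the site URL and the markup are placeholders, not my actual setup:

```python
import requests

SITE = "https://example.com"  # placeholder: your WordPress site URL

def fetch_terms(taxonomy):
    """Page through the core WP REST API and return every term."""
    terms, page = [], 1
    while True:
        resp = requests.get(
            f"{SITE}/wp-json/wp/v2/{taxonomy}",
            params={"per_page": 100, "page": page},
            timeout=10,
        )
        if resp.status_code != 200:  # past the last page, WP returns an error
            break
        batch = resp.json()
        if not batch:
            break
        terms.extend(batch)
        page += 1
    return terms

# Build simple HTML sections: each term links to its archive page.
sections = []
for taxonomy, heading in [("categories", "Categories"), ("tags", "Tags")]:
    terms = [t for t in fetch_terms(taxonomy) if t["count"] > 0]
    items = "\n".join(
        f'  <li><a href="{t["link"]}">{t["name"]}</a> ({t["count"]})</li>'
        for t in sorted(terms, key=lambda t: t["name"].lower())
    )
    sections.append(f"<h2>{heading}</h2>\n<ul>\n{items}\n</ul>")

print("\n".join(sections))
```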
My sites are a sieve for visitors and the Google crawl bot.
Many SEOs talk about carefully funneling link juice to key pages.
I think it’s called link sculpting or page rank sculpting.
I don’t like the concept at all.
I liken it to a pressure washer, but I can’t help but envision a burst pipe.
![Burst pipe](https://fatstacksblog.com/wp-content/uploads/2021/06/burst-pipe-june142021.jpg)
I prefer visitors and crawl bots to flow easily and uninhibited through every nook and cranny of my sites.
Like sprinklers watering a large lawn:
![Lawn sprinklers](https://fatstacksblog.com/wp-content/uploads/2021/06/lawn-sprinkles-june142021.jpg)
Do I know for certain my approach is best for SEO?
Not at all.
Many reading this might be thinking “but Jon, you can provide plenty of navigation with archive pages etc. and still noindex those pages to control link juice.”
Yes, that’s true. Many SEOs suggest that you do this.
And while there are some archive pages I do noindex, it’s not many, and they’re usually ones with only a few articles listed.
But it still doesn’t make sense to noindex archive pages (even tag archives) by default, especially for the types of sites I publish.
I’m not gunning to rank one article on my site. There’s no reason for me to snipe all link juice to one URL.
My goal is to grow overall authority throughout the site so that many (thousands of) articles, existing and future, benefit.
I’m of the view, and it’s just a hunch, that Google likes navigation, including archive pages.
So that’s what I do.
That doesn’t mean I don’t noindex stuff.
I actually noindex a lot of content. I do so when it’s thin. Maybe it’s a list of products/affiliate links with little text, or a blatant bridge page I drive traffic to. Google has made it clear it’s not a fan of those pages, so I noindex them.
Or maybe I have some clickbait article that’s a list of images with no text. Again, it’s thin. I noindex it.
And then there are articles syndicated from other sites. I noindex those as well. I publish them for readers, for social and for other reasons.
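If you want to spot-check which of your URLs actually carry a noindex, here’s a rough sketch (the URLs are placeholders; it checks the X-Robots-Tag response header and does a crude regex scan for the meta robots tag, so it won’t catch every markup variation):

```python
import re
import requests

# Placeholder URLs; swap in the pages you want to spot-check.
urls = [
    "https://example.com/tag/widgets/",
    "https://example.com/some-syndicated-article/",
]

# Rough check: assumes name="robots" appears before content="..." in the tag.
META_ROBOTS = re.compile(
    r'<meta[^>]*name=["\']robots["\'][^>]*content=["\']([^"\']*)["\']',
    re.IGNORECASE,
)

for url in urls:
    resp = requests.get(url, timeout=10)
    header = resp.headers.get("X-Robots-Tag", "")  # server-side directive
    match = META_ROBOTS.search(resp.text)          # on-page meta tag
    meta = match.group(1) if match else ""
    flagged = "noindex" in header.lower() or "noindex" in meta.lower()
    print(f"{url}: {'noindex' if flagged else 'indexable'}")
```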
But archive pages that help visitors navigate… I index those because I like the idea of bots having an easy time flowing through my sites.
How do I know when to create a new archive page?
I ask myself whether it would help readers. If I have 4 or 5 articles that could use the same tag and it’s a stand-alone topic, I create the tag and index the tag archive.
Perhaps I’m naive, but this is how I do it and it’s served me well.
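For what it’s worth, that 4-or-5-article bar is easy to audit against your existing tags via the same core WordPress REST API. A rough sketch (placeholder site URL; per_page caps at 100 tags per request, so large sites would need to paginate):

```python
import requests

SITE = "https://example.com"  # placeholder: your WordPress site URL
THRESHOLD = 4  # roughly the 4-to-5-article bar described above

# Fetch up to 100 tags, smallest archives first.
resp = requests.get(
    f"{SITE}/wp-json/wp/v2/tags",
    params={"per_page": 100, "orderby": "count", "order": "asc"},
    timeout=10,
)
for tag in resp.json():
    verdict = (
        "worth an indexed archive"
        if tag["count"] >= THRESHOLD
        else "consider noindexing or merging"
    )
    print(f'{tag["name"]}: {tag["count"]} posts -> {verdict}')
```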
And no, most of my archive pages don’t rank for search terms. A few do, but most don’t. I actually thought I’d rank more of them, but I haven’t. But that doesn’t mean they don’t provide benefits sitewide. I believe they do.
Part of good user experience is making it easy for visitors to navigate your site.
I see no reason the same shouldn’t apply to crawl bots.