Running CHECKDB regularly is important for the integrity of your database. It's recommended that you run it as often as possible, so you find out right away if you have a corruption issue. (This is also a good reason to take regular database and log backups.)
The only problem is that CHECKDB is very I/O-intensive on your servers' storage. Not only that, but it can take a LONG time to complete on very large databases (VLDBs). This can put you at odds with your SAN or VM admins.
Here are some ideas for running consistency checks on VLDBs. They aren't perfect, and you'll likely have to customize your own solution, but at least this can get the ball rolling.
Partitioning the CHECKDB work into chunks can help. It's not perfect, since you won't have a complete picture every night, but over the course of a few days you'll know whether the database is corruption free. If you have multiple files and filegroups, you can use DBCC CHECKFILEGROUP. From there you can tier it: for example, if you have a read-only filegroup, run that check only once a week, and run everything else every day or two.
If you don't have multiple files and filegroups, that's worth looking into for a VLDB anyway.
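The tiered filegroup approach above might look something like this sketch. The database and filegroup names (SalesDB, CurrentData, Archive2020) are placeholders for illustration, not a prescription:

```sql
-- Hypothetical sketch of a tiered DBCC CHECKFILEGROUP schedule.
-- Assumes a database named SalesDB with filegroups PRIMARY,
-- CurrentData (read/write), and Archive2020 (read-only).
USE SalesDB;

-- Check the active filegroups every night (or every other night):
DBCC CHECKFILEGROUP (N'PRIMARY') WITH NO_INFOMSGS;
DBCC CHECKFILEGROUP (N'CurrentData') WITH NO_INFOMSGS;

-- Check the read-only filegroup only once a week, e.g. Sundays:
IF DATENAME(WEEKDAY, GETDATE()) = 'Sunday'
    DBCC CHECKFILEGROUP (N'Archive2020') WITH NO_INFOMSGS;
```

In practice you'd wrap each tier in its own Agent job rather than branching on the weekday, but the idea is the same: the stable, read-only data doesn't need to be checked as often as the hot data.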
Another way to split CHECKDB into chunks is by table, using DBCC CHECKTABLE. I had a client with one extremely large table that made up the majority of the database size, so I was able to split up the run time accordingly. Eventually I convinced them to partition the table itself, and once we did, I used the DBCC CHECKFILEGROUP approach described above.
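A per-table split might be sketched like this. The table names are hypothetical; note that CHECKDB also performs allocation and catalog checks, so if you go table by table, you should run those separately too:

```sql
-- Hypothetical sketch of splitting consistency checks by table.
-- Assumes dbo.BigTable dominates the database size.
USE SalesDB;

-- Night 1: just the huge table.
DBCC CHECKTABLE (N'dbo.BigTable') WITH NO_INFOMSGS;

-- Night 2: the allocation and catalog checks that CHECKTABLE
-- alone does not cover, plus the remaining (smaller) tables.
DBCC CHECKALLOC WITH NO_INFOMSGS;
DBCC CHECKCATALOG;
DBCC CHECKTABLE (N'dbo.Orders') WITH NO_INFOMSGS;
DBCC CHECKTABLE (N'dbo.Customers') WITH NO_INFOMSGS;
```

For a real environment you'd likely drive the table list from a cursor over sys.tables with a round-robin schedule, rather than hard-coding names.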
Finally, my favorite method is to simply restore the database to another server and run CHECKDB there. I love this method because it lets me test my restores every night (most VLDBs are considered critical in my experience). Second, it offloads all the work from the main server. This method can easily be automated. I understand that not every environment has the resources for this, but if yours does, I definitely recommend this option.
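The restore-and-check pattern can be sketched as follows. All names and paths here are placeholders (your backup share, data/log file locations, and logical file names will differ):

```sql
-- Hypothetical sketch: restore last night's full backup on a
-- secondary server, then run the full CHECKDB there.
RESTORE DATABASE SalesDB_Check
    FROM DISK = N'\\backupshare\SalesDB\SalesDB_Full.bak'
    WITH MOVE N'SalesDB_Data' TO N'D:\Data\SalesDB_Check.mdf',
         MOVE N'SalesDB_Log'  TO N'L:\Log\SalesDB_Check.ldf',
         REPLACE, RECOVERY;

-- Full consistency check on the restored copy:
DBCC CHECKDB (N'SalesDB_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

Schedule this as an Agent job after your nightly backup completes, and you've validated both the backup and the database integrity in one pass, with zero CHECKDB load on production.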