If you are thinking about deploying 10GbE connectivity in the rack and in the datacenter, you will probably be comparing fiber with copper, and even within the latter there are several alternatives. On the internet you will find a strong tendency toward SFP+ (or even the somewhat older CX4), mostly with passive twinax cables.
So why is twinax* better than RJ45 on 10GbE switches?
Allegedly because of lower latency and lower power consumption: direct-attached SFP+ copper cables “use only 0.1 watt of power per connection and introduce only approximately 0.25 microsecond of latency”, which is up to 80 times better than their RJ45 10GBASE-T counterparts “that consume 4 to 8W per transceiver and contribute a latency of up to 2.5 microseconds per link”: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c27-489248.html
This claim is supported by additional information in Table 3 at: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-461802.html
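The quoted figures make the arithmetic behind Cisco's “up to 80 times” claim easy to check; a minimal sketch using only the numbers cited above (taking the 10GBASE-T worst case):

```python
# Ratios implied by the Cisco white paper figures quoted above.
sfp_dac_watt, sfp_dac_latency_us = 0.1, 0.25      # passive SFP+ twinax
rj45_watt_max, rj45_latency_us = 8.0, 2.5         # 10GBASE-T worst case

power_ratio = rj45_watt_max / sfp_dac_watt             # 80x at the extremes
latency_ratio = rj45_latency_us / sfp_dac_latency_us   # 10x

print(f"Power: up to {power_ratio:.0f}x, latency: up to {latency_ratio:.0f}x")
```

Note that the 80x figure only holds when you pair the best-case twinax number with the worst-case 10GBASE-T number.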
Although a general tendency toward lower power consumption can indeed be observed with short-distance passive connectors, I wonder whether such values are not debatable: the difference of 0.1W (twinax) versus 8W (RJ45) could well be overblown. At least the SFF-8431 specification allows for up to 1.5W (at Power Level II): ftp://ftp.seagate.com/sff/SFF-8431.PDF
On the other hand, typical 10GBASE-T values cited on the internet are 3-4W at 2-4 microseconds of latency. In that scenario we get a 2-4x reduction in power consumption, but is this the final truth?
Judging from DELL’s site, not exactly. An obsolete product page lists 8W per-port consumption for a B-TI24X (http://www.dell.com/us/business/p/powerconnect-b-ti24x/pd), but if you pick a recent 24-port 10GbE switch and compare the 8024 with the 8024F, the former having 24x 10GBASE-T ports and the latter 24x SFP+, you will find 811.39 BTU/hr and 237.77W max compared to 548.66 BTU/hr and 160.78W max. That is not even a factor-of-2 reduction.
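The Dell figures can be cross-checked: the BTU/hr ratings and the max-wattage ratings should agree after unit conversion, and the actual ratio between the two switches falls well short of 2x. A small sketch using the spec-sheet numbers quoted above:

```python
# Sanity-check the Dell 8024 (10GBASE-T) vs 8024F (SFP+) figures.
BTU_PER_HR_TO_WATT = 0.293071  # 1 BTU/hr expressed in watts

specs = {
    "8024 (10GBASE-T)": {"btu_hr": 811.39, "max_watt": 237.77},
    "8024F (SFP+)":     {"btu_hr": 548.66, "max_watt": 160.78},
}

for name, s in specs.items():
    # The BTU/hr rating should match the wattage rating after conversion.
    derived_watt = s["btu_hr"] * BTU_PER_HR_TO_WATT
    print(f"{name}: {derived_watt:.2f} W derived vs {s['max_watt']} W listed")

ratio = specs["8024 (10GBASE-T)"]["max_watt"] / specs["8024F (SFP+)"]["max_watt"]
print(f"Power ratio 10GBASE-T : SFP+ = {ratio:.2f}")  # ~1.48, not 2x
```

So at the whole-switch level the 10GBASE-T model draws only about 48% more power than the SFP+ model.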
With that said, it is once again up to the circumstances to make the right decision. Within a rack, twinax makes sense when you need low latency over short distances (for example, connecting a Hyper-V cluster to a SAN); otherwise it is not strictly necessary**. For distances longer than 55 meters, fiber is the only way to keep latency short. For anything in between, especially if you are planning a multi-purpose access switch that can also be flexibly positioned not only in a rack but also in a wall-mounted patch panel, etc., I see no reason not to use Cat6 (or better) cables. Latency negligible? Did I hear someone mention VDI?
* Twinax is also called twinaxial cable by Intel:
**The calculation works only if the two types of switches cost the same (not the case with DELL). Consider 30 1U servers with two cables each. At roughly $80 price difference versus a Cat6 FTP cable, that adds up to $4800 in additional costs. For the whole switch: 811.39 BTU/hr − 548.66 BTU/hr = 262.73 BTU/hr × 0.293071 W per BTU/hr = 76.9985 W × 8765.81 hr/yr = 674.95 kWh × $0.134 per kWh (US average) = $90.44 in savings per year per switch. With three switches for the 60 cables, you will need approximately 18 years to break even :)
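The payback arithmetic above can be reproduced step by step; a minimal sketch using the same assumptions (60 cables, ~$80 premium per twinax cable versus Cat6 FTP, the Dell 8024/8024F BTU ratings, and $0.134/kWh):

```python
# Reproduce the back-of-the-envelope payback calculation above.
BTU_PER_HR_TO_WATT = 0.293071
HOURS_PER_YEAR = 8765.81           # average year, leap years included

extra_cable_cost = 60 * 80         # $4800 up-front premium for twinax
delta_btu_hr = 811.39 - 548.66     # 262.73 BTU/hr saved per SFP+ switch
delta_watt = delta_btu_hr * BTU_PER_HR_TO_WATT        # ~77 W
kwh_per_year = delta_watt * HOURS_PER_YEAR / 1000     # ~675 kWh
savings_per_switch = kwh_per_year * 0.134             # ~$90.44 per year
payback_years = extra_cable_cost / (3 * savings_per_switch)

print(f"Savings per switch: ${savings_per_switch:.2f}/yr")
print(f"Payback with 3 switches: {payback_years:.1f} years")
```

At roughly $271 of combined yearly savings against a $4800 cable premium, the payback lands just under 18 years, which is far beyond any realistic switch lifetime.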
*** Special thanks to Ryan Aust for the improvement proposal (see comments)