Well, after having gone through all the trouble to create something that essentially didn't exist for the public, Cisco was nice enough to create something that was better...in PDF format. Here is the link for their guide...
https://www.cisco.com/c/dam/en/us/td/docs/security/ise/how_to/HowTo-95-Cisco_and_F5_Deployment_Guide-ISE_Load_Balancing_Using_BIG-IP.pdf
Having just completed the process of load balancing nine Cisco ISE Policy Service Nodes (PSNs) behind our F5 load balancer, I found that one of the most frustrating things was the absolute lack of a publicly available step-by-step guide. This might be something that is simple for someone who is an expert with F5 load balancers, but for a wireless guy with no real F5 or ISE experience it can be a pretty hefty challenge.
With that in mind I have decided to write a step-by-step guide documenting the process I used to get everything working. It should be noted that F5 can make significant changes from one version of code to the next, so this guide may have to be modified slightly depending on code version. Our BIG-IP is currently running code version 11.4.
This is a basic diagram of how the ISE system is connected today. The IP addresses have been changed to protect the innocent. ISE has some requirements that must be met in order to put the PSNs behind any load balancer. First, the ISE nodes have to be configured so that the F5 acts as their default gateway, which means they must be layer-2 adjacent to the F5. Second, Source NAT does not work with ISE. ISE uses the source IP address of the Network Access Device (NAD) to track RADIUS sessions and to perform Change of Authorization (CoA). Third, RADIUS sessions must be configured for persistence on the F5 through the use of an iRule, which I will provide in the step-by-step instructions.
For this deployment I decided that I did not want the traffic between the admin/monitor nodes and the individual PSNs to go through the load balancer since it really wasn't necessary and I wasn't all that sure how to set it up anyway. To accomplish this I set static routes on the ISE PSNs for specific hosts to use the VLAN 1 default gateway rather than the IP address of the F5.
Cisco ISE static route:
ip route 192.168.2.10 255.255.255.255 gateway 192.168.1.1
NOTE: If you are already a master of the F5 and would just like some general guidance on how to configure it rather than a step-by-step guide, you can go to the link below. It also provides some background information on why things are configured the way they are that I didn't go over in this post.
https://supportforums.cisco.com/blog/153056/ise-and-load-balancing
Now without further ado...
Step 1:
The ISE PSNs that are to receive load balanced traffic need to be added to the F5 system. You can do this on the F5 by going to Local Traffic -> Nodes -> Node List and creating an entry for each of your PSN nodes.
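For reference, the same node entries can be created from the BIG-IP command line with tmsh. The node names and addresses below are placeholders I made up (they are not the addresses from the diagram), and exact tmsh syntax can vary between code versions:

```
# Create one node entry per ISE PSN (names and addresses are examples)
create ltm node ise-psn1 address 192.168.1.21
create ltm node ise-psn2 address 192.168.1.22
save sys config
```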
Step 2:
To ensure that the Policy Service Nodes are reachable and available for authentication and accounting, monitoring probes need to be configured on the F5. These probes require an account (AD in our case) that allows them to verify that the PSNs are connected to back-end resources such as Active Directory. These can be created by going to Local Traffic -> Monitors and creating two monitors. The type for the first will be RADIUS and the type for the second will be RADIUS Accounting.
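If you prefer the CLI, something along these lines should create the two monitors in tmsh. The monitor names, probe account, and secrets are placeholders; the RADIUS secret must match what the PSNs are configured to expect from the F5:

```
# RADIUS authentication monitor (probes port 1812 by default)
create ltm monitor radius radius_auth_mon { username ise-probe password probe-password secret radius-secret }
# RADIUS accounting monitor (probes port 1813 by default)
create ltm monitor radius-accounting radius_acct_mon { username ise-probe password probe-password secret radius-secret }
```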
Step 3:
Next the server pools used to load balance RADIUS authentication and accounting traffic need to be created. You can use either round robin or least connections as your load balancing method. We went with least connections. You can create these pools by going to Local Traffic -> Pool List and hitting the Create button. The screen shots below illustrate the configuration options I have set.
Be sure to add the health monitors created in the last step to their respective pools under each pool's Health Monitors configuration.
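As a rough tmsh equivalent of the screen shots (pool names and member addresses are mine, not from the diagram), the two pools with least connections and the monitors attached might look like:

```
# Authentication pool on 1812 with the RADIUS monitor attached
create ltm pool radius_auth_pool { load-balancing-mode least-connections-member monitor radius_auth_mon members add { 192.168.1.21:1812 192.168.1.22:1812 } }
# Accounting pool on 1813 with the RADIUS accounting monitor attached
create ltm pool radius_acct_pool { load-balancing-mode least-connections-member monitor radius_acct_mon members add { 192.168.1.21:1813 192.168.1.22:1813 } }
```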
Step 4:
In this step the iRule that will be used to maintain persistence is created. I have included my iRule below. ISE uses two RADIUS attributes for session tracking and both of them should be included in the iRule: Calling-Station-Id (the client MAC address) and Framed-IP-Address (the client IP address). You create the iRule by going to Local Traffic -> iRules -> iRule List and pressing the Create button. Feel free to copy and paste it at your leisure. Just be aware that this iRule may not work with all versions of F5 code.
The iRule:
# ISE persistence iRule based on Calling-Station-Id (Client MAC Address) and Framed-IP-Address (Client IP address)
when CLIENT_DATA {
    set framed_ip [RADIUS::avp 8 ip4]
    set calling_station_id [RADIUS::avp 31 "string"]
    # log local0. "Request from $calling_station_id:$framed_ip"
    persist uie "$calling_station_id:$framed_ip"
}
Step 5:
Once the iRule is created a persistence profile has to be configured. This persistence profile will be used by the RADIUS virtual servers to maintain persistence based on the criteria in the iRule. To create the persistence profile go to Local Traffic -> Profiles -> Persistence and press the Create button. It should be noted that I have seen an alternate version of persistence configuration that involved applying the iRule directly to the virtual server rather than creating a persistence profile. I tried it and it didn't work for me. I can only assume this is something that works differently in different versions of the F5 code.
Lots of stuff to configure on this one. Be sure to select 'Universal' as the persistence type, check 'Match Across Services', and add the iRule created in the previous step. The Timeout is more of a personal preference, but I did configure it. The Custom check box on the far right has to be checked in order to enable all the options below it.
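For the CLI-inclined, the same universal persistence profile tied to the iRule can be sketched in tmsh roughly as follows. The profile and iRule names are placeholders, and the timeout is whatever value you prefer:

```
# Universal persistence profile that invokes the persistence iRule
create ltm persistence universal radius_persist { rule ise_radius_persist match-across-services enabled timeout 3600 }
```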
Step 6:
Now that all that other stuff is set up it's time to set up the virtual servers used to load balance RADIUS traffic. You create virtual servers by going to Local Traffic -> Virtual Servers -> Virtual Server List and pressing the Create button. Since there are a large number of configuration options I will put some explanations between the screen captures.
In the above section a source of 0.0.0.0/0 is used because the load balancer is supposed to receive RADIUS traffic from all network devices. Our network has several different network management subnets so this was really the only option, but it could be changed to a specific subnet if so desired. The destination is the VIP used for load balancing. The service port in this case is 1812 because this is the authentication virtual server.
It will be necessary to set the configuration mode to advanced to get all the configuration options needed. The big thing on the above screen shot is the RADIUS profile. It should be set to radiusLB_calling_station_id.
In the above example I have All VLANs and Tunnels configured for VLAN and Tunnel Traffic. This can be configured for just the VLANs the load balancer uses to pass traffic. For instance, VLAN 1 on the network diagram could be set here. In fact, that's exactly how I have it configured. It's just not VLAN 1 and photo shopping screen shots is something I'm just not interested in doing.
Under the resources section the Default Pool and Default Persistence Profile need to be set. These were both created in previous steps. Note the iRule section at the bottom. Remember in the persistence profile step that I mentioned adding the iRule directly to the virtual server? This is where you would do that. It didn't work for me, but it could change with the code version.
These next few screen shots are basically the same as the four previous. The only real difference is that this is the configuration for RADIUS accounting so port 1813 is used instead of 1812.
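Pulling the last few steps together, the two RADIUS virtual servers might be sketched in tmsh like this. The VIP address, pool, profile, and persistence names are placeholders based on my setup, and on 11.4 a few of these options may be named slightly differently:

```
# Authentication VIP (UDP/1812) -- RADIUS profile, pool, and persistence from earlier steps
create ltm virtual radius_auth_vs { destination 192.168.1.20:1812 ip-protocol udp profiles add { udp radiusLB_calling_station_id } pool radius_auth_pool persist replace-all-with { radius_persist } }
# Accounting VIP (UDP/1813) -- same idea, different port and pool
create ltm virtual radius_acct_vs { destination 192.168.1.20:1813 ip-protocol udp profiles add { udp radiusLB_calling_station_id } pool radius_acct_pool persist replace-all-with { radius_persist } }
```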
Step 7:
Now that the big core load balancing is set up there are some optional configurations for load balancing things like DHCP, CoA, NMAP and SNMP. None of these are absolutely required, but it's highly likely that you will use one or more of them with ISE. I am using all of them.
Policy NAT configuration:
Unlike RADIUS traffic on ports 1812 and 1813, other things such as CoA and SNMP use source NAT. The only thing that needs to be configured is the server in the member list. This should be configured with the host name of the VIP.
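If you are building the SNAT pool from the command line, a minimal sketch looks something like this. The pool name is mine, and the member is the VIP address used by the RADIUS virtual servers:

```
# SNAT pool whose only member is the load-balancing VIP
create ltm snatpool ise_vip_snat { members add { 192.168.1.20 } }
```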
DHCP Profiling:
ISE is capable of using DHCP traffic to profile endpoints as they connect to the network. If you want to use DHCP profiling the load balancer will have to be configured for it. The next several screen captures illustrate how this is done.
The DHCP server pool list is created by going to Local Traffic -> Pools -> Pool List and pressing the Create button. I chose to use a built in ICMP health monitor to track the health status of the ISE nodes. The member list includes all the ISE PSNs behind the load balancer and Round Robin is used for the load balancing technique since the ISE PSNs will share all DHCP information with each other.
The DHCP virtual server is pretty basic. The configurations are basically the same as those used to configure the RADIUS servers previously. Be sure to configure the default pool under the Resources tab to use the DHCP pool configured previously. You create the DHCP virtual servers by going to Local Traffic -> Virtual Servers -> Virtual Server List and pressing the Create button.
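A rough tmsh sketch of the DHCP pool and virtual server follows. The names and addresses are placeholders, and gateway_icmp is one of the built-in ICMP monitors:

```
# Round robin is fine here since the PSNs share DHCP profiling data with each other
create ltm pool dhcp_profiling_pool { load-balancing-mode round-robin monitor gateway_icmp members add { 192.168.1.21:67 192.168.1.22:67 } }
# DHCP virtual server pointing at the profiling pool
create ltm virtual dhcp_profiling_vs { destination 192.168.1.20:67 ip-protocol udp pool dhcp_profiling_pool }
```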
RADIUS CoA:
Because the CoA communication will initiate from the ISE nodes, this server needs to be configured to accept traffic from the network the ISE PSNs are on. The service port should be 1700, which is used for CoA. The remaining configurations are similar to those used in the RADIUS virtual servers. The two differences are the Source Address Translation method and the SNAT pool. Both of these are under the advanced configuration options of the virtual server. You create the CoA virtual server by going to Local Traffic -> Virtual Servers -> Virtual Server List and pressing the Create button.
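In tmsh the CoA virtual server might be sketched roughly as below. The source network stands in for the ISE PSN subnet, and the source address translation points at the SNAT pool from the policy NAT section (all names and addresses are placeholders):

```
# CoA VIP (UDP/1700) accepting traffic sourced from the PSN network, SNATed to the VIP
create ltm virtual ise_coa_vs { source 192.168.1.0/24 destination 192.168.1.20:1700 ip-protocol udp source-address-translation { type snat pool ise_vip_snat } }
```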
SNMP:
SNMP is almost identical to CoA. The only difference is the Service Port used. In this case it will be port 161. The Source Address Translation method and SNAT pool are the same as CoA. You create the SNMP virtual servers by going to Local Traffic -> Virtual Servers -> Virtual Server List and pressing the Create button.
Step 8:
Finally, a virtual server needs to be configured to handle all other inbound traffic that will go through the load balancer, and one that will be used to handle all return traffic. You create these virtual servers by going to Local Traffic -> Virtual Servers -> Virtual Server List and pressing the Create button.
The return traffic server is intended as a catch-all to handle all other traffic that might pass through the load balancer. The source network is the ISE PSN network and all ports are allowed. The only other thing that needs to be configured is the Protocol Profile. This needs to be set to 'fastL4', which can help improve performance.
The default forward is a catch-all rule designed to handle all traffic destined for the ISE PSNs not specifically covered already. The destination network is set for the ISE PSN network and all ports are forwarded. Once again, the only thing that needs to be configured is the Protocol Profile.
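As a loose tmsh sketch of the two catch-all virtual servers (the network addresses are placeholders for the ISE PSN subnet, and the syntax for wildcard destinations varies a bit between versions):

```
# Return traffic: anything sourced from the PSN network, any port, fastL4 for performance
create ltm virtual ise_return_vs { source 192.168.1.0/24 destination 0.0.0.0:any ip-protocol any profiles add { fastL4 } translate-address disabled translate-port disabled }
# Default forward: anything destined to the PSN network not already matched
create ltm virtual ise_forward_vs { destination 192.168.1.0:any mask 255.255.255.0 ip-protocol any profiles add { fastL4 } translate-address disabled translate-port disabled }
```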
Well, that's about it. I am pretty sure that I will have to tweak this as time goes on, but this is what is currently being used in production and it's been running solid for a few weeks now.