web-archive-net.com » NET » M » MAJORNETWORK.NET

Total: 263



  • tls – Majornetwork
    …up, let me know. Oh, and don't report it not working with IE 6 on Windows XP: it's not me, it's you.

    Original URL path: https://majornetwork.net/tag/tls/ (2016-04-25)

  • Cisco Nexus FEX Lineup – Majornetwork
    …are 1G-capable with appropriate SFP modules. A usual use case is a blade chassis with pass-through modules, or a rack server installation: with two 2232PP FEXes you can provide 32 servers (like 2 chassis with 16 blades) with redundant 10G Ethernet and FCoE connectivity. The oversubscription rate is 4:1 (320G for hosts, 80G for fabric; see the sketch after this entry).

    Nexus 2232TM
    Nexus 2232TM, available since 2011, has 32 host ports which are 1/10G copper RJ-45 ports. It does not support FCoE. The fabric ports are implemented in an 8-port SFP module, thus the M designation. I'm not aware of any other module options here, however. The use case for the 2232TM FEX is a 10GBASE-T server deployment.

    Nexus 2232TM-E
    Nexus 2232TM-E is an enhanced version of the 2232TM, available since 2012. It supports FCoE on 10GBASE-T with Cat6a/7 cabling for up to 30 meters.

    Nexus 2248PQ
    Nexus 2248PQ, available since 2013, is not to be confused with the other, previous 2248 variants. This has 48 host ports which are all SFP ports supporting 1G and 10G pluggable modules. FCoE is supported with 10G. The 4 fabric ports are QSFP ports, thus the Q designation. The QSFP ports can be used in either 40G or 4x10G mode. In addition to the usual QSFP pluggables (including the 40G to 4x10G splitters) there is also a Cisco-specific FET-40G module, which is similar to the FET-10G (low cost, limited range, limited use) but for 40G fabric connectivity. The oversubscription rate is 3:1 (480G for hosts, 160G for fabric).

    Nexus 2348UPQ
    Nexus 2348UPQ was announced in August 2014. It has 48 SFP host ports and 6 QSFP fabric ports. The U in the designation comes from Unified, meaning that the host ports also support 2G/4G/8G/16G Fibre Channel with appropriate optics. At the moment it is hardware support only; the NX-OS software support will come later. 16G FC can only be used on 24 ports. FCoE is supported as usual with the 10G FEXes. 4 of the QSFP fabric ports can also be used for host connectivity, using either 40G connectivity or splitting them to 4x10G ports; software support for this is also yet to come. The use case for the 2348UPQ FEX may include a situation where you have a mixed Ethernet/FCoE/FC server implementation, or where you otherwise need to distribute some of your native FC ports farther from your Nexus parent switch. The oversubscription rate is 2:1 (480G for hosts, 240G for fabric).

    Nexus 2348TQ
    Nexus 2348TQ was announced in October 2014 and is similar to the Nexus 2348UPQ above. The major difference is that it has 48 10GBASE-T (1G/10G copper) host ports instead of SFP. FCoE is supported for Cat6a/7 cables up to 30 meters. The minimum required NX-OS version is 7.1(0)N1(1) for Nexus 5500/5600/6000 series switches.

    Nexus 2332TQ
    Nexus 2332TQ was announced in January 2015 and is in turn similar to…

    Original URL path: https://majornetwork.net/2014/08/cisco-nexus-fex-lineup/ (2016-04-25)
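    The oversubscription figures quoted in the excerpt above are simply host-port bandwidth divided by fabric-port bandwidth. A minimal Python sketch (not from the original post; the port counts and speeds are taken from the excerpt, not verified against Cisco datasheets) that reproduces the ratios:

        # Host vs. fabric bandwidth for some of the FEX models mentioned above.
        fex_models = {
            # model: (host ports, host port speed in Gbps, total fabric bandwidth in Gbps)
            "Nexus 2232PP":  (32, 10, 80),    # 8 x 10G SFP+ fabric
            "Nexus 2248PQ":  (48, 10, 160),   # 4 x 40G QSFP fabric
            "Nexus 2348UPQ": (48, 10, 240),   # 6 x 40G QSFP fabric
        }

        for model, (ports, speed, fabric_gbps) in fex_models.items():
            host_gbps = ports * speed
            ratio = host_gbps / fabric_gbps
            print(f"{model}: {host_gbps}G hosts / {fabric_gbps}G fabric = {ratio:g}:1")

    Run as-is this prints the 4:1, 3:1 and 2:1 ratios mentioned in the post.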

  • Cat6500/6800 IOS 15SY Feature and Packaging Information – Majornetwork
    …edge implementations. IP Services brings you these features: Easy Virtual Networks (EVN), EIGRP, OSPF, GLBP, NAT, VRF-Lite, NHRP, BGP, BGP PIC. Advanced IP Services is needed for these: Advanced Multicast, mVPN, IS-IS, MPLS, L3 VPNs, IP SLA, ACL Dry-run, ACL Atomic Commit, L3VPN over mGRE, EoMPLS, OSPFv3 IPsec Authentication. Finally, Advanced Enterprise Services adds these: VPLS, Native A-VPLS, DLSw. Updated June 28, 2014 17:35. Tags: 15.0SY, 15.1SY, cat6500, cat6800, catalyst 6500, catalyst 6800, cisco, ios, sup2t

    Original URL path: https://majornetwork.net/2014/06/cat65006800-ios-15sy-feature-and-packaging-information/ (2016-04-25)

  • Hostname or Path? Does It Matter? – Majornetwork
    …the correct service to each of the servers. You can have as many servers as you want; the load balancer will take care of balancing the load (raise your hand now if you don't know what I'm talking about). In the hostname case the load balancer implementation is equally simple. You just point both of the hostnames, runner.majornetwork.net and walker.majornetwork.net, to the load balancer and apply some configurations, and now the load balancer again can do its job properly, with no difficulties separating the two services to different servers, this time using the destination IP address or the HTTP Host header in the HTTP request (see the sketch after this entry).

    An important thing to understand here is that while the actual servers behind the load balancer can be located anywhere in the Internet, the data flow will always go via the load balancers. So the load balancers can be in the customer's datacenter and the servers (or some of the servers) can be in the cloud, but the users' requests and the servers' responses will always go through the customer's datacenter.

    Now when you compare the hostname and path cases, you realize that if you choose the hostname way of naming your service, you can implement a separate load balancer in the IaaS platform for runner.majornetwork.net and another load balancer in the customer datacenter for walker.majornetwork.net, and both applications will run smoothly. You can add server capacity for both applications as you need. The content will always be served from the same datacenter where the request came in originally. No problem here.

    If you choose the path way of naming the service, then you may have a problem: the requests will always come to the datacenter where the load balancer hosting majornetwork.net is, and all the data will flow through that datacenter. This is not a problem in every case, but imagine that the Runner application (majornetwork.net/runner) is hosted in the IaaS platform and is gaining popularity. All the requests will still come to the customer's datacenter, because majornetwork.net points there, and the datacenter connection capacity may become a problem. That could lead to disaster, as the users are not able to get the content appropriately. It does not help to add more servers in the IaaS platform, because the bottleneck is still in the connectivity of the customer datacenter, not in the processing power of the IaaS platform. In addition to the capacity issues, the latency will also be higher when the connections are being routed here and there. That may affect the user experience even with small loads.

    Can I Balance the Load to Different Datacenters?
    As a generic answer, yes. It is called global load balancing. In global load balancing the content is usually served to the user from the nearest available server. The concept of nearest is somewhat relative, but in the most basic case the global load balancing decision is made based on the…

    Original URL path: https://majornetwork.net/2014/05/hostname-or-path-does-it-matter/ (2016-04-25)
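    A minimal Python sketch of the two ways the post describes for a load balancer to tell the services apart: by the Host header or by the URL path. This is only an illustration under assumed names (the pool names and the fallback are made up for the example); it is not the post's own configuration:

        def pick_pool(host: str, path: str) -> str:
            # Hostname-based: each service has its own name, so each name can be
            # pointed to a different load balancer (and datacenter) independently.
            by_host = {
                "runner.majornetwork.net": "runner-servers",
                "walker.majornetwork.net": "walker-servers",
            }
            if host in by_host:
                return by_host[host]
            # Path-based: both services live under majornetwork.net, so every
            # request first lands wherever that single name points to.
            if path.startswith("/runner"):
                return "runner-servers"
            if path.startswith("/walker"):
                return "walker-servers"
            return "default-servers"

        print(pick_pool("runner.majornetwork.net", "/"))    # runner-servers
        print(pick_pool("majornetwork.net", "/walker/abc")) # walker-servers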

  • Home Computing History – Majornetwork
    (Year: CPU, RAM, disk(s), network, OS; the table is truncated at the start)
    … OS/2
    1998: K6-2 266, 64 MB, 3800 + 850 MB, 10 Mbps, Windows NT
    2000: K6-2 450, 128 MB, 15 GB, 100 Mbps, Windows 2000
    2002: Duron 1.2, 512 MB, 40 GB, LAN 100 Mbps / WLAN 54 Mbps, Windows XP
    2004: Celeron 1.3, 512 MB, 40 GB + 100 GB, WLAN 54 Mbps / LAN 100 Mbps, Windows XP
    2008–2014: Core2Duo 2.2, 2 GB → 4 GB, 160 GB / 500 GB / 128 GB SSD / 500 GB + 2 TB NAS / a bunch of 100–1000 MB USB disks around, WLAN 54–300 Mbps / LAN 1 Gbps, Windows XP / Windows 7

    Apparently there has been some development in all technology areas. I've been a laptop and WLAN user at home since 2004; nowadays I use the gigabit LAN on my laptop only for occasional large data transfers. My faithful Lenovo ThinkPad T61 has served me well from 2008 and I don't have immediate plans for replacing it. Actually, I have an order for an ExpressCard USB3 adapter just waiting; let's see how much faster USB drives work with it. I have to admit that nowadays the Flash-based websites are getting me annoyed more and more with this computer. I can see that the responsiveness is much better with my work laptop, a ThinkPad T430s with a Core i7 2.9 GHz processor. Good laptops are not cheap, however; let's see next year what happens.

    About the networking history: in 1998 I got a 10 Mbps Ethernet connection to the Internet; until then I had only had dialup connections. For a couple of years I then ran a Linux server as a router. Actually, I don't remember ever running my workstation connected directly to the always-on Internet connection, other than just for a moment for testing or so. I've always had a router with NAT (no luxury of having a routed IPv4 subnet) and a firewall. After some years I bought a Buffalo WLAN router for the gateway. Various Buffalos (in late times with DD-WRT software) did the job until 2012, when I got the Juniper SRX100 for routing and firewalling after upgrading the Internet connection to 50M/10M. The home LAN, apart from the router, has been gigabit since 2010. This year I set up a VMware ESXi host and bought a couple of small and cheap but managed Netgear GS108Tv2 gigabit switches to enable VLANs as well.

    The storage situation is being changed. Until now I've had local disks, and then a NAS drive and USB disks for backups. I'm thinking about getting a proper NAS with NFS/iSCSI connectivity for ESXi; it would make securing the data much easier. Let's see how I finally decide. The switches and other stuff don't play any visible role in the apartment; they are all placed out of sight. That's why I usually have only WLAN access on my desk: the nearest Ethernet jack or switch is not conveniently located without…

    Original URL path: https://majornetwork.net/2013/09/home-computing-history/ (2016-04-25)

  • QSFP+ Specifics on Nexus 5500 and Nexus 6000 Series Switches – Majornetwork
    …x 10G ports, not 4 x 40G ports. The corollary of this is that when you connect a 2248PQ FEX to a Nexus 6004 switch, you need to first reconfigure the correct switch ports to 4 x 10G mode before you can access the FEX, because the QSFP ports on Nexus 6004 are in the 1 x 40G mode by default, whereas the only supported mode for the 2248PQ is the 4 x 10G mode. This seems to be a FAQ, as Cisco TAC has published a separate document about it: "Nexus 2248PQ FEX Connection Problem with a Nexus 6000 40G QSFP Port" (cisco.com). Here is a configuration example from the NX-OS Interface Operations Guide (cisco.com):

    N6004-TME3(config)# interface breakout slot 2 port 1-3 map 10g-4x
    N6004-TME3(config)# interface breakout slot 2 port 7-12 map 10g-4x
    N6004-TME3(config)# poweroff module 2
    N6004-TME3(config)# 2013 Jan  2 23:30:40 N6004-TME3 %$ VDC-1 %$ %PFMA-2-MOD_REMOVE: Module 2 removed (Serial number FOC16422P28)
    N6004-TME3(config)# no poweroff module 2
    N6004-TME3(config)# show interface brief | incl Eth2
    Eth2/1/1      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/1/2      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/1/3      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/1/4      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/2/1      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/2/2      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/2/3      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/2/4      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/3/1      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/3/2      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/3/3      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/3/4      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/4        1    eth  access  down  SFP not inserted    40G(D)
    Eth2/5        1    eth  access  down  SFP not inserted    40G(D)
    Eth2/6        1    eth  access  down  SFP not inserted    40G(D)
    Eth2/7/1      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/7/2      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/7/3      1    eth  access  down  SFP not inserted    10G(D)
    Eth2/7/4      1    eth  access  down  SFP not inserted    10G(D)
    [output omitted]
    Eth2/12/1     1    eth  access  down  SFP not inserted    10G(D)
    Eth2/12/2     1    eth  access  down  SFP not inserted    10G(D)
    Eth2/12/3     1    eth  access  down  SFP not inserted    10G(D)
    Eth2/12/4     1    eth  access  down  SFP not inserted    10G(D)
    N6004-TME3(config)#

    In this example the first three QSFP ports were reconfigured to 4 x 10G mode, and also the ports 7-12. That left ports Eth2/4, Eth2/5 and Eth2/6 in the original…

    Original URL path: https://majornetwork.net/2013/08/qsfp-specifics-on-nexus-5500-and-nexus-6000-series-switches/ (2016-04-25)

  • Originating Default Route in OSPF in Junos – Majornetwork
    …0      *[Access-internal/12] 00:56:39
                > to 85.xxx.xxx.1 via ge-0/0/0.0
              [Aggregate/130] 00:25:11
                  Discard

    Note that the discard route is not active at the moment, because the ISP-supplied DHCP route has a better preference (12) than the generated route (130).

    Second, I needed a policy that takes the default route and accepts it:

    admin@SRX# show policy-options policy-statement DEFAULT-ORIGINATE
    term DEFAULT-ROUTE {
        from {
            route-filter 0.0.0.0/0 exact;
        }
        then accept;
    }

    Finally, I attached the policy in the OSPF process:

    admin@SRX# show protocols ospf
    export DEFAULT-ORIGINATE;
    area 0.0.0.0 {
        interface ge-0/0/14.0;
        interface ge-0/0/15.0;
    }

    Verification in the Nexus switch:

    NEXUS01# sh ip route ospf
    0.0.0.0/0, ubest/mbest: 1/0
        *via 10.1.0.1, Eth1/1, [110/0], 00:23:13, ospf-1, type-2

    Thanks to Per Westerlund (@PerWesterlund) and Marko Milivojevic (@icemarkom) for commenting on my questions regarding these configurations. Updated July 5, 2013 20:54. Tags: juniper, junos, ospf, routing

    3 Comments:

    Per Westerlund, July 5, 2013 at 17:59: I believe the next-hop-self is mostly relevant with BGP; with OSPF it will be enough to just accept. Announcing as an external route with OSPF implicitly says "send to me".

    Markku Leiniö, July 5, 2013 at 18:03: Good point, forgot to check that. I'll remove it here.

    Markku Leiniö, July 6, 2013 at 13:16: As shown above, the route type will be external type 2 by default. If you want to get type 1 instead, add this to the policy: set policy-options policy-statement DEFAULT-ORIGINATE term DEFAULT-ROUTE then external type 1

    Original URL path: https://majornetwork.net/2013/07/originating-default-route-in-ospf-in-junos/ (2016-04-25)


